Test Report: Docker_Linux_crio 17711

f28d1a49818b7f9a8aa01fc1422de67f34c38faf:2023-12-06:32174

Test fail (6/315)

Order  Failed Test                                          Duration (s)
   35  TestAddons/parallel/Ingress                                151.13
  166  TestIngressAddonLegacy/serial/ValidateIngressAddons        184.76
  216  TestMultiNode/serial/PingHostFrom2Pods                       3.18
  232  TestPreload                                                 29.37
  238  TestRunningBinaryUpgrade                                    66.52
  261  TestStoppedBinaryUpgrade/Upgrade                            94.99
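
To reproduce one of these failures outside CI, re-run just the failing test against the same driver and runtime. A minimal sketch, assuming a minikube checkout at the commit above and the TEST_ARGS pass-through of minikube's `make integration` target from the contributor docs (the start flags are copied from the Audit log further down):

    # build the binary under test, then re-run only the failing test
    git checkout f28d1a49818b7f9a8aa01fc1422de67f34c38faf
    make
    # quoting of the multi-flag -minikube-start-args value may need adjusting
    # for your shell/make combination
    env TEST_ARGS="-minikube-start-args='--driver=docker --container-runtime=crio' -test.run TestAddons/parallel/Ingress" make integration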
TestAddons/parallel/Ingress (151.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-906021 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-906021 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-906021 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3a631046-6926-490f-93d4-84f3d1bc7c71] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3a631046-6926-490f-93d4-84f3d1bc7c71] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.009692351s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-906021 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.719683444s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-906021 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-906021 addons disable ingress --alsologtostderr -v=1: (7.600022329s)
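
A note on the failure mode: `ssh: Process exited with status 28` above is the exit code of the remote curl, and curl reserves 28 for a timeout; the ~2m9s elapsed time is consistent with the TCP connection to the ingress never completing, rather than the server returning the wrong content. A triage sketch, runnable only while the profile still exists and before the ingress addon is disabled (the controller deployment name is an assumption carried over from the upstream ingress-nginx manifests):

    # retry the failing request with an explicit bound; curl exits 28 on timeout
    out/minikube-linux-amd64 -p addons-906021 ssh \
      "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'; echo exit=\$?"
    # if it times out again, check the controller's state and recent logs
    kubectl --context addons-906021 -n ingress-nginx get pods -o wide
    kubectl --context addons-906021 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50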
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-906021
helpers_test.go:235: (dbg) docker inspect addons-906021:

-- stdout --
	[
	    {
	        "Id": "ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d",
	        "Created": "2023-12-06T18:00:53.766828754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T18:00:54.061842876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:75d04379c0692a7c7580bf47e8a90f896e08db4459e8feaaa815f73da348a8e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d/hosts",
	        "LogPath": "/var/lib/docker/containers/ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d/ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d-json.log",
	        "Name": "/addons-906021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-906021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-906021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2dba1130ad8ab91cf393aedb319333882c1a6da8fb6368a9646b5fd0c7a88d56-init/diff:/var/lib/docker/overlay2/ec06e12da6157da3a94af2b1665e4c856c3ea27be6944a5fef4fd2886cc68e28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2dba1130ad8ab91cf393aedb319333882c1a6da8fb6368a9646b5fd0c7a88d56/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2dba1130ad8ab91cf393aedb319333882c1a6da8fb6368a9646b5fd0c7a88d56/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2dba1130ad8ab91cf393aedb319333882c1a6da8fb6368a9646b5fd0c7a88d56/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-906021",
	                "Source": "/var/lib/docker/volumes/addons-906021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-906021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-906021",
	                "name.minikube.sigs.k8s.io": "addons-906021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "daa78a5b19812262fa7f24b38acdba7262bd9950652001df85c52ff4d9608308",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/daa78a5b1981",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-906021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ad8d52705d34",
	                        "addons-906021"
	                    ],
	                    "NetworkID": "aa1adf7625ad6521373b855dd5e7891e9dd4c9317af2e08b7e354e68bd137eaf",
	                    "EndpointID": "4e84a76d788c9a232903f59be96eae179739ef937750ee86844ad572a369dfa1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
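
For triage, the useful fields in the inspect dump above are State (the node container was still running), NetworkSettings.Ports (SSH published on 127.0.0.1:32772) and the addons-906021 network entry (node IP 192.168.49.2, the address the nslookup above targeted). A jq one-liner to extract just those, assuming jq is available on the host:

    docker inspect addons-906021 | jq '.[0] | {state: .State.Status,
      ports: .NetworkSettings.Ports,
      ip: .NetworkSettings.Networks["addons-906021"].IPAddress}'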
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-906021 -n addons-906021
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-906021 logs -n 25: (1.162879003s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-480808                                                                     | download-only-480808   | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC | 06 Dec 23 18:00 UTC |
	| delete  | -p download-only-480808                                                                     | download-only-480808   | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC | 06 Dec 23 18:00 UTC |
	| start   | --download-only -p                                                                          | download-docker-248947 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |                     |
	|         | download-docker-248947                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-248947                                                                   | download-docker-248947 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC | 06 Dec 23 18:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-582843   | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |                     |
	|         | binary-mirror-582843                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46823                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-582843                                                                     | binary-mirror-582843   | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC | 06 Dec 23 18:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |                     |
	|         | addons-906021                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |                     |
	|         | addons-906021                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-906021 --wait=true                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC | 06 Dec 23 18:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:02 UTC | 06 Dec 23 18:02 UTC |
	|         | addons-906021                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:02 UTC | 06 Dec 23 18:02 UTC |
	|         | -p addons-906021                                                                            |                        |         |         |                     |                     |
	| ip      | addons-906021 ip                                                                            | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:02 UTC | 06 Dec 23 18:02 UTC |
	| addons  | addons-906021 addons disable                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:02 UTC | 06 Dec 23 18:02 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-906021 ssh curl -s                                                                   | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC | 06 Dec 23 18:03 UTC |
	|         | addons-906021                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-906021 ssh cat                                                                       | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC | 06 Dec 23 18:03 UTC |
	|         | /opt/local-path-provisioner/pvc-204e6b15-ce12-41b4-aed1-14c06d79cf42_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-906021 addons disable                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC | 06 Dec 23 18:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC | 06 Dec 23 18:03 UTC |
	|         | -p addons-906021                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-906021 addons disable                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC | 06 Dec 23 18:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-906021 addons                                                                        | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:03 UTC | 06 Dec 23 18:03 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-906021 addons                                                                        | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:04 UTC | 06 Dec 23 18:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-906021 addons                                                                        | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:04 UTC | 06 Dec 23 18:04 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-906021 ip                                                                            | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:05 UTC | 06 Dec 23 18:05 UTC |
	| addons  | addons-906021 addons disable                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:05 UTC | 06 Dec 23 18:05 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-906021 addons disable                                                                | addons-906021          | jenkins | v1.32.0 | 06 Dec 23 18:05 UTC | 06 Dec 23 18:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:00:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:00:31.689744   17420 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:00:31.690008   17420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:31.690018   17420 out.go:309] Setting ErrFile to fd 2...
	I1206 18:00:31.690023   17420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:31.690292   17420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:00:31.690977   17420 out.go:303] Setting JSON to false
	I1206 18:00:31.691843   17420 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2581,"bootTime":1701883051,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:00:31.691900   17420 start.go:138] virtualization: kvm guest
	I1206 18:00:31.694400   17420 out.go:177] * [addons-906021] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:00:31.696087   17420 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:00:31.696083   17420 notify.go:220] Checking for updates...
	I1206 18:00:31.697673   17420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:00:31.699224   17420 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:00:31.700630   17420 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:00:31.701960   17420 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:00:31.703284   17420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:00:31.705022   17420 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:00:31.724836   17420 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:00:31.724948   17420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:31.772624   17420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-06 18:00:31.764722726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:31.772713   17420 docker.go:295] overlay module found
	I1206 18:00:31.774610   17420 out.go:177] * Using the docker driver based on user configuration
	I1206 18:00:31.776129   17420 start.go:298] selected driver: docker
	I1206 18:00:31.776147   17420 start.go:902] validating driver "docker" against <nil>
	I1206 18:00:31.776158   17420 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:00:31.776930   17420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:31.827497   17420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-06 18:00:31.819938748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:31.827896   17420 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:00:31.828187   17420 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:00:31.830267   17420 out.go:177] * Using Docker driver with root privileges
	I1206 18:00:31.831945   17420 cni.go:84] Creating CNI manager for ""
	I1206 18:00:31.831965   17420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:00:31.831980   17420 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 18:00:31.832008   17420 start_flags.go:323] config:
	{Name:addons-906021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-906021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:00:31.833734   17420 out.go:177] * Starting control plane node addons-906021 in cluster addons-906021
	I1206 18:00:31.835085   17420 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:00:31.836439   17420 out.go:177] * Pulling base image ...
	I1206 18:00:31.837981   17420 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:00:31.838002   17420 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:00:31.838019   17420 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:00:31.838032   17420 cache.go:56] Caching tarball of preloaded images
	I1206 18:00:31.838125   17420 preload.go:174] Found /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 18:00:31.838137   17420 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 18:00:31.838426   17420 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/config.json ...
	I1206 18:00:31.838451   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/config.json: {Name:mkf68501d31a179970eb26838df8da7f338b022a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:31.852610   17420 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f to local cache
	I1206 18:00:31.852740   17420 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory
	I1206 18:00:31.852757   17420 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory, skipping pull
	I1206 18:00:31.852761   17420 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in cache, skipping pull
	I1206 18:00:31.852771   17420 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f as a tarball
	I1206 18:00:31.852776   17420 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f from local cache
	I1206 18:00:44.646621   17420 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f from cached tarball
	I1206 18:00:44.646696   17420 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:00:44.646742   17420 start.go:365] acquiring machines lock for addons-906021: {Name:mkd27b887ae3324f94808b348cbdcfda8f9e647f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:00:44.646866   17420 start.go:369] acquired machines lock for "addons-906021" in 103.018µs
	I1206 18:00:44.646898   17420 start.go:93] Provisioning new machine with config: &{Name:addons-906021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-906021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:00:44.647003   17420 start.go:125] createHost starting for "" (driver="docker")
	I1206 18:00:44.780872   17420 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1206 18:00:44.781128   17420 start.go:159] libmachine.API.Create for "addons-906021" (driver="docker")
	I1206 18:00:44.781169   17420 client.go:168] LocalClient.Create starting
	I1206 18:00:44.781280   17420 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem
	I1206 18:00:44.845503   17420 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem
	I1206 18:00:44.917824   17420 cli_runner.go:164] Run: docker network inspect addons-906021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 18:00:44.933180   17420 cli_runner.go:211] docker network inspect addons-906021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 18:00:44.933252   17420 network_create.go:281] running [docker network inspect addons-906021] to gather additional debugging logs...
	I1206 18:00:44.933274   17420 cli_runner.go:164] Run: docker network inspect addons-906021
	W1206 18:00:44.949770   17420 cli_runner.go:211] docker network inspect addons-906021 returned with exit code 1
	I1206 18:00:44.949802   17420 network_create.go:284] error running [docker network inspect addons-906021]: docker network inspect addons-906021: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-906021 not found
	I1206 18:00:44.949817   17420 network_create.go:286] output of [docker network inspect addons-906021]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-906021 not found
	
	** /stderr **
	I1206 18:00:44.949922   17420 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:00:44.969274   17420 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000bc2b00}
	I1206 18:00:44.969314   17420 network_create.go:124] attempt to create docker network addons-906021 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 18:00:44.969365   17420 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-906021 addons-906021
	I1206 18:00:45.212902   17420 network_create.go:108] docker network addons-906021 192.168.49.0/24 created
	I1206 18:00:45.212934   17420 kic.go:121] calculated static IP "192.168.49.2" for the "addons-906021" container
	I1206 18:00:45.212997   17420 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 18:00:45.228021   17420 cli_runner.go:164] Run: docker volume create addons-906021 --label name.minikube.sigs.k8s.io=addons-906021 --label created_by.minikube.sigs.k8s.io=true
	I1206 18:00:45.300408   17420 oci.go:103] Successfully created a docker volume addons-906021
	I1206 18:00:45.300484   17420 cli_runner.go:164] Run: docker run --rm --name addons-906021-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-906021 --entrypoint /usr/bin/test -v addons-906021:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib
	I1206 18:00:48.545778   17420 cli_runner.go:217] Completed: docker run --rm --name addons-906021-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-906021 --entrypoint /usr/bin/test -v addons-906021:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib: (3.245253307s)
	I1206 18:00:48.545816   17420 oci.go:107] Successfully prepared a docker volume addons-906021
	I1206 18:00:48.545854   17420 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:00:48.545881   17420 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 18:00:48.545937   17420 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-906021:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 18:00:53.699702   17420 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-906021:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.153721017s)
	I1206 18:00:53.699735   17420 kic.go:203] duration metric: took 5.153851 seconds to extract preloaded images to volume
	W1206 18:00:53.699855   17420 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 18:00:53.699949   17420 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 18:00:53.752726   17420 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-906021 --name addons-906021 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-906021 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-906021 --network addons-906021 --ip 192.168.49.2 --volume addons-906021:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
	I1206 18:00:54.070162   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Running}}
	I1206 18:00:54.086477   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:00:54.102939   17420 cli_runner.go:164] Run: docker exec addons-906021 stat /var/lib/dpkg/alternatives/iptables
	I1206 18:00:54.144425   17420 oci.go:144] the created container "addons-906021" has a running status.
	I1206 18:00:54.144464   17420 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa...
	I1206 18:00:54.254419   17420 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 18:00:54.274265   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:00:54.291346   17420 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 18:00:54.291366   17420 kic_runner.go:114] Args: [docker exec --privileged addons-906021 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 18:00:54.359380   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:00:54.375429   17420 machine.go:88] provisioning docker machine ...
	I1206 18:00:54.375463   17420 ubuntu.go:169] provisioning hostname "addons-906021"
	I1206 18:00:54.375526   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:54.398294   17420 main.go:141] libmachine: Using SSH client type: native
	I1206 18:00:54.398840   17420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1206 18:00:54.398868   17420 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-906021 && echo "addons-906021" | sudo tee /etc/hostname
	I1206 18:00:54.400567   17420 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46984->127.0.0.1:32772: read: connection reset by peer
	I1206 18:00:57.529773   17420 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-906021
	
	I1206 18:00:57.529852   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:57.546626   17420 main.go:141] libmachine: Using SSH client type: native
	I1206 18:00:57.547001   17420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1206 18:00:57.547027   17420 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-906021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-906021/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-906021' | sudo tee -a /etc/hosts; 
				fi
			fi
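The script above is minikube's idempotent hostname fix-up: if no /etc/hosts line already ends with the new hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one, so name resolution for addons-906021 works inside the node. A quick host-side check (a sketch, using the container name from this run):

	docker exec addons-906021 grep addons-906021 /etc/hosts
	# expected: a line ending in addons-906021, either the entry Docker
	# generated for the container IP or the 127.0.1.1 fallback added here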
	I1206 18:00:57.664189   17420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:00:57.664221   17420 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:00:57.664244   17420 ubuntu.go:177] setting up certificates
	I1206 18:00:57.664256   17420 provision.go:83] configureAuth start
	I1206 18:00:57.664334   17420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-906021
	I1206 18:00:57.681315   17420 provision.go:138] copyHostCerts
	I1206 18:00:57.681396   17420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:00:57.681521   17420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:00:57.681599   17420 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:00:57.681671   17420 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.addons-906021 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-906021]
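The server certificate generated here carries SANs for the container IP (192.168.49.2), loopback, and the minikube hostnames, which is what allows TLS connections to the node through the 127.0.0.1 port-forwards set up by the docker run above. The SANs can be inspected with openssl (a sketch against the path from this run):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'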
	I1206 18:00:57.739384   17420 provision.go:172] copyRemoteCerts
	I1206 18:00:57.739453   17420 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:00:57.739495   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:57.757560   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:00:57.848428   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:00:57.869760   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1206 18:00:57.890942   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 18:00:57.911298   17420 provision.go:86] duration metric: configureAuth took 247.030027ms
	I1206 18:00:57.911325   17420 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:00:57.911479   17420 config.go:182] Loaded profile config "addons-906021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:00:57.911574   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:57.927565   17420 main.go:141] libmachine: Using SSH client type: native
	I1206 18:00:57.927883   17420 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1206 18:00:57.927904   17420 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:00:58.129687   17420 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:00:58.129710   17420 machine.go:91] provisioned docker machine in 3.754258391s
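The CRIO_MINIKUBE_OPTIONS written a few lines up pass --insecure-registry 10.96.0.0/12 to CRI-O; that range is the cluster's service CIDR (see the kubeadm options later in this log), so images served from any in-cluster ClusterIP registry can be pulled over plain HTTP. Verifying the file landed (a sketch):

	docker exec addons-906021 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '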
	I1206 18:00:58.129721   17420 client.go:171] LocalClient.Create took 13.348545427s
	I1206 18:00:58.129757   17420 start.go:167] duration metric: libmachine.API.Create for "addons-906021" took 13.348629626s
	I1206 18:00:58.129767   17420 start.go:300] post-start starting for "addons-906021" (driver="docker")
	I1206 18:00:58.129781   17420 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:00:58.129851   17420 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:00:58.129898   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:58.145559   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:00:58.232471   17420 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:00:58.235348   17420 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:00:58.235377   17420 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:00:58.235386   17420 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:00:58.235392   17420 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 18:00:58.235403   17420 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:00:58.235455   17420 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:00:58.235476   17420 start.go:303] post-start completed in 105.70324ms
	I1206 18:00:58.236583   17420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-906021
	I1206 18:00:58.252674   17420 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/config.json ...
	I1206 18:00:58.252966   17420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:00:58.253005   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:58.269430   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:00:58.352642   17420 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:00:58.356454   17420 start.go:128] duration metric: createHost completed in 13.709437903s
	I1206 18:00:58.356481   17420 start.go:83] releasing machines lock for "addons-906021", held for 13.709599283s
	I1206 18:00:58.356542   17420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-906021
	I1206 18:00:58.371695   17420 ssh_runner.go:195] Run: cat /version.json
	I1206 18:00:58.371736   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:58.371815   17420 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:00:58.371862   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:00:58.390924   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:00:58.391213   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:00:58.562600   17420 ssh_runner.go:195] Run: systemctl --version
	I1206 18:00:58.566653   17420 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:00:58.703135   17420 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:00:58.707094   17420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:00:58.724262   17420 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:00:58.724354   17420 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:00:58.750130   17420 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
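Disabling the stock loopback, podman, and crio bridge CNI configs is deliberate: kindnet is selected as the CNI later in this log, and the runtime loads the lexically first file in /etc/cni/net.d, so a leftover bridge config would shadow it. The renames are easy to confirm (a sketch):

	docker exec addons-906021 ls /etc/cni/net.d
	# expected to include 87-podman-bridge.conflist.mk_disabled and
	# 100-crio-bridge.conf.mk_disabled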
	I1206 18:00:58.750159   17420 start.go:475] detecting cgroup driver to use...
	I1206 18:00:58.750196   17420 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:00:58.750240   17420 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:00:58.763333   17420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:00:58.773024   17420 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:00:58.773077   17420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:00:58.784954   17420 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:00:58.797025   17420 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:00:58.870515   17420 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:00:58.945226   17420 docker.go:219] disabling docker service ...
	I1206 18:00:58.945274   17420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:00:58.961992   17420 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:00:58.972972   17420 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:00:59.045360   17420 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:00:59.128608   17420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:00:59.138439   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:00:59.152350   17420 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 18:00:59.152401   17420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:00:59.161019   17420 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:00:59.161075   17420 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:00:59.170041   17420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:00:59.178990   17420 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
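Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image pinned to registry.k8s.io/pause:3.9, cgroup_manager set to cgroupfs (matching the driver detected on the host at 18:00:58), and conmon_cgroup set to pod, the value CRI-O expects alongside the cgroupfs manager. A sketch for verifying the result:

	docker exec addons-906021 grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"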
	I1206 18:00:59.187839   17420 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:00:59.196024   17420 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:00:59.203432   17420 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 18:00:59.211028   17420 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:00:59.287171   17420 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 18:00:59.397925   17420 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:00:59.398009   17420 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:00:59.401320   17420 start.go:543] Will wait 60s for crictl version
	I1206 18:00:59.401366   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:00:59.404248   17420 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:00:59.434110   17420 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1206 18:00:59.434221   17420 ssh_runner.go:195] Run: crio --version
	I1206 18:00:59.467388   17420 ssh_runner.go:195] Run: crio --version
	I1206 18:00:59.501501   17420 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1206 18:00:59.503066   17420 cli_runner.go:164] Run: docker network inspect addons-906021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:00:59.518103   17420 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 18:00:59.521410   17420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:00:59.530848   17420 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:00:59.530911   17420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:00:59.581911   17420 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 18:00:59.581934   17420 crio.go:415] Images already preloaded, skipping extraction
	I1206 18:00:59.581979   17420 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:00:59.611466   17420 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 18:00:59.611490   17420 cache_images.go:84] Images are preloaded, skipping loading
	I1206 18:00:59.611543   17420 ssh_runner.go:195] Run: crio config
	I1206 18:00:59.650833   17420 cni.go:84] Creating CNI manager for ""
	I1206 18:00:59.650854   17420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:00:59.650872   17420 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:00:59.650888   17420 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-906021 NodeName:addons-906021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 18:00:59.651014   17420 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-906021"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 18:00:59.651069   17420 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-906021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-906021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
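Note the empty ExecStart= line in the drop-in above: that is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before substituting the minikube-specific command line. The merged unit can be inspected on the node (a sketch):

	docker exec addons-906021 systemctl cat kubelet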
	I1206 18:00:59.651112   17420 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 18:00:59.658757   17420 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:00:59.658828   17420 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 18:00:59.666363   17420 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1206 18:00:59.681890   17420 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 18:00:59.697383   17420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
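The 2094-byte file written here is the four-document kubeadm config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---). kubeadm v1.26 and newer ship a validate subcommand that can sanity-check such a file before init; a sketch using the binaries path from this run:

	docker exec addons-906021 sudo /var/lib/minikube/binaries/v1.28.4/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml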
	I1206 18:00:59.713159   17420 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 18:00:59.716502   17420 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:00:59.726466   17420 certs.go:56] Setting up /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021 for IP: 192.168.49.2
	I1206 18:00:59.726517   17420 certs.go:190] acquiring lock for shared ca certs: {Name:mk88da27ec99c860f0c2ad3f4fab21b90cf40c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.726646   17420 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key
	I1206 18:00:59.835889   17420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt ...
	I1206 18:00:59.835926   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt: {Name:mk08fcfb014b3884727abd8f92fa7dc2b7ad001c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.836144   17420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key ...
	I1206 18:00:59.836160   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key: {Name:mk24ede593d55c7425a16f1c1d02789f81ce7c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.836302   17420 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key
	I1206 18:00:59.890937   17420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt ...
	I1206 18:00:59.890969   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt: {Name:mk078eff7ab0ea9ca35bf25e6852d64a835b827c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.891151   17420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key ...
	I1206 18:00:59.891165   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key: {Name:mk849d7f2bbc4b3cc9666a3e62d833b23d739562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.891286   17420 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.key
	I1206 18:00:59.891301   17420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt with IP's: []
	I1206 18:00:59.978332   17420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt ...
	I1206 18:00:59.978379   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: {Name:mk9f2716b5f3fae059adb1221be0199a5545eb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.978568   17420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.key ...
	I1206 18:00:59.978598   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.key: {Name:mkaf9f7f420b4c390dde76bce308ada49e2ac31a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:00:59.978695   17420 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.key.dd3b5fb2
	I1206 18:00:59.978713   17420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 18:01:00.118617   17420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.crt.dd3b5fb2 ...
	I1206 18:01:00.118653   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.crt.dd3b5fb2: {Name:mk0f1e5db8de78222080ee54c1acdacaa9459dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:01:00.118838   17420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.key.dd3b5fb2 ...
	I1206 18:01:00.118855   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.key.dd3b5fb2: {Name:mkfa54cafb7ec3fb3cf9447a8ede34ca1bc91c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:01:00.118953   17420 certs.go:337] copying /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.crt
	I1206 18:01:00.119040   17420 certs.go:341] copying /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.key
	I1206 18:01:00.119103   17420 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.key
	I1206 18:01:00.119119   17420 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.crt with IP's: []
	I1206 18:01:00.322213   17420 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.crt ...
	I1206 18:01:00.322244   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.crt: {Name:mkc2f937d374999db0a40f7fd021e92dcd7eb41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:01:00.322426   17420 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.key ...
	I1206 18:01:00.322443   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.key: {Name:mk423e2a84ab8792aa3ad147e0f980794d8d3e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:01:00.322642   17420 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 18:01:00.322683   17420 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem (1078 bytes)
	I1206 18:01:00.322707   17420 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:01:00.322739   17420 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem (1675 bytes)
	I1206 18:01:00.323321   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 18:01:00.345050   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 18:01:00.366033   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 18:01:00.387288   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 18:01:00.409193   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:01:00.430039   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 18:01:00.450392   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:01:00.471274   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 18:01:00.491473   17420 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:01:00.511872   17420 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 18:01:00.527150   17420 ssh_runner.go:195] Run: openssl version
	I1206 18:01:00.531976   17420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:01:00.540444   17420 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:01:00.543567   17420 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:01:00.543632   17420 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:01:00.549869   17420 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
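The b5213941.0 symlink name is the subject hash printed by openssl x509 -hash in the previous step: OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.<n>, so linking minikubeCA.pem under its hash makes the cluster CA trusted system-wide inside the node. Reproducing the hash (a sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash, b5213941 in this run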
	I1206 18:01:00.558363   17420 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:01:00.561374   17420 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:01:00.561414   17420 kubeadm.go:404] StartCluster: {Name:addons-906021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-906021 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:01:00.561472   17420 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 18:01:00.561507   17420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 18:01:00.592982   17420 cri.go:89] found id: ""
	I1206 18:01:00.593040   17420 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 18:01:00.600763   17420 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 18:01:00.608045   17420 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1206 18:01:00.608093   17420 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 18:01:00.615095   17420 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:01:00.615135   17420 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 18:01:00.656964   17420 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 18:01:00.657154   17420 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 18:01:00.689578   17420 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1206 18:01:00.689667   17420 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1206 18:01:00.689746   17420 kubeadm.go:322] OS: Linux
	I1206 18:01:00.689828   17420 kubeadm.go:322] CGROUPS_CPU: enabled
	I1206 18:01:00.689899   17420 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1206 18:01:00.689970   17420 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1206 18:01:00.690030   17420 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1206 18:01:00.690108   17420 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1206 18:01:00.690194   17420 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1206 18:01:00.690276   17420 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1206 18:01:00.690346   17420 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1206 18:01:00.690407   17420 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1206 18:01:00.748021   17420 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:01:00.748117   17420 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:01:00.748199   17420 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:01:00.927157   17420 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:01:00.930264   17420 out.go:204]   - Generating certificates and keys ...
	I1206 18:01:00.930402   17420 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 18:01:00.930508   17420 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 18:01:01.040459   17420 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:01:01.340712   17420 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:01:01.517371   17420 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 18:01:01.682528   17420 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 18:01:01.830962   17420 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 18:01:01.831212   17420 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-906021 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 18:01:02.033032   17420 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 18:01:02.033168   17420 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-906021 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 18:01:02.190564   17420 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:01:02.371812   17420 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:01:02.547003   17420 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 18:01:02.547100   17420 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:01:02.643635   17420 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:01:02.748307   17420 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:01:02.822061   17420 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:01:02.889197   17420 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:01:02.889735   17420 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:01:02.891876   17420 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:01:02.894215   17420 out.go:204]   - Booting up control plane ...
	I1206 18:01:02.894392   17420 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:01:02.894494   17420 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:01:02.895784   17420 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:01:02.903529   17420 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:01:02.904332   17420 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:01:02.904377   17420 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 18:01:02.987628   17420 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 18:01:07.989433   17420 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001879 seconds
	I1206 18:01:07.989556   17420 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 18:01:08.000999   17420 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 18:01:08.520829   17420 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 18:01:08.521009   17420 kubeadm.go:322] [mark-control-plane] Marking the node addons-906021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 18:01:09.030932   17420 kubeadm.go:322] [bootstrap-token] Using token: 67ds51.lnlc4yei23g2ww4m
	I1206 18:01:09.032408   17420 out.go:204]   - Configuring RBAC rules ...
	I1206 18:01:09.032577   17420 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 18:01:09.036138   17420 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 18:01:09.042136   17420 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 18:01:09.044949   17420 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 18:01:09.048842   17420 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 18:01:09.052065   17420 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 18:01:09.063594   17420 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 18:01:09.272012   17420 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 18:01:09.441276   17420 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 18:01:09.442410   17420 kubeadm.go:322] 
	I1206 18:01:09.442525   17420 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 18:01:09.442564   17420 kubeadm.go:322] 
	I1206 18:01:09.442690   17420 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 18:01:09.442704   17420 kubeadm.go:322] 
	I1206 18:01:09.442738   17420 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 18:01:09.442871   17420 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 18:01:09.442941   17420 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 18:01:09.442952   17420 kubeadm.go:322] 
	I1206 18:01:09.443024   17420 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 18:01:09.443033   17420 kubeadm.go:322] 
	I1206 18:01:09.443130   17420 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 18:01:09.443147   17420 kubeadm.go:322] 
	I1206 18:01:09.443221   17420 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 18:01:09.443346   17420 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 18:01:09.443451   17420 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 18:01:09.443500   17420 kubeadm.go:322] 
	I1206 18:01:09.443626   17420 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 18:01:09.443733   17420 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 18:01:09.443742   17420 kubeadm.go:322] 
	I1206 18:01:09.443808   17420 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 67ds51.lnlc4yei23g2ww4m \
	I1206 18:01:09.443901   17420 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 \
	I1206 18:01:09.443920   17420 kubeadm.go:322] 	--control-plane 
	I1206 18:01:09.443926   17420 kubeadm.go:322] 
	I1206 18:01:09.443992   17420 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 18:01:09.444001   17420 kubeadm.go:322] 
	I1206 18:01:09.444081   17420 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 67ds51.lnlc4yei23g2ww4m \
	I1206 18:01:09.444181   17420 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 
	I1206 18:01:09.445774   17420 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1206 18:01:09.445890   17420 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
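The sha256 value in the join commands above is a hash of the cluster CA's public key. It can be recomputed from the CA certificate, which this cluster keeps under /var/lib/minikube/certs (see the certificateDir line at 18:01:00) rather than the stock /etc/kubernetes/pki. The standard derivation, sketched:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print the value shown in the join command:
	# 3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611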
	I1206 18:01:09.445918   17420 cni.go:84] Creating CNI manager for ""
	I1206 18:01:09.445926   17420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:01:09.447996   17420 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1206 18:01:09.449296   17420 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 18:01:09.452733   17420 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 18:01:09.452753   17420 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 18:01:09.468483   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 18:01:10.113117   17420 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 18:01:10.113199   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:10.113269   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3 minikube.k8s.io/name=addons-906021 minikube.k8s.io/updated_at=2023_12_06T18_01_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:10.120017   17420 ops.go:34] apiserver oom_adj: -16
	I1206 18:01:10.205576   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:10.268125   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:10.830615   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:11.330653   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:11.830018   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:12.330862   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:12.830390   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:13.330676   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:13.830728   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:14.330401   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:14.830095   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:15.330024   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:15.830855   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:16.330618   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:16.830241   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:17.330920   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:17.830965   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:18.330511   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:18.830972   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:19.330147   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:19.830966   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:20.330815   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:20.830895   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:21.330449   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:21.830607   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:22.330848   17420 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:01:22.396629   17420 kubeadm.go:1088] duration metric: took 12.283489079s to wait for elevateKubeSystemPrivileges.
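The burst of kubectl get sa default calls between 18:01:10 and 18:01:22 is a poll loop: kubeadm returns before the controller-manager has populated the default ServiceAccount, and minikube retries roughly every 500ms (the elevateKubeSystemPrivileges wait timed above) until the account exists. The equivalent wait, sketched as shell:

	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done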
	I1206 18:01:22.396673   17420 kubeadm.go:406] StartCluster complete in 21.835260818s
	I1206 18:01:22.396696   17420 settings.go:142] acquiring lock: {Name:mk659e0e4749486c04957a41070055ba699e8e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:01:22.396822   17420 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:01:22.397198   17420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/kubeconfig: {Name:mk369d6bc31165e4100c77201c4dc2786cd89bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:01:22.397394   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 18:01:22.397412   17420 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1206 18:01:22.397524   17420 addons.go:69] Setting volumesnapshots=true in profile "addons-906021"
	I1206 18:01:22.397543   17420 addons.go:69] Setting gcp-auth=true in profile "addons-906021"
	I1206 18:01:22.397548   17420 addons.go:231] Setting addon volumesnapshots=true in "addons-906021"
	I1206 18:01:22.397553   17420 addons.go:69] Setting default-storageclass=true in profile "addons-906021"
	I1206 18:01:22.397563   17420 mustload.go:65] Loading cluster: addons-906021
	I1206 18:01:22.397585   17420 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-906021"
	I1206 18:01:22.397587   17420 addons.go:69] Setting cloud-spanner=true in profile "addons-906021"
	I1206 18:01:22.397587   17420 addons.go:69] Setting helm-tiller=true in profile "addons-906021"
	I1206 18:01:22.397598   17420 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-906021"
	I1206 18:01:22.397616   17420 addons.go:231] Setting addon cloud-spanner=true in "addons-906021"
	I1206 18:01:22.397618   17420 addons.go:231] Setting addon helm-tiller=true in "addons-906021"
	I1206 18:01:22.397623   17420 addons.go:69] Setting registry=true in profile "addons-906021"
	I1206 18:01:22.397621   17420 addons.go:69] Setting ingress-dns=true in profile "addons-906021"
	I1206 18:01:22.397633   17420 addons.go:231] Setting addon registry=true in "addons-906021"
	I1206 18:01:22.397650   17420 addons.go:69] Setting inspektor-gadget=true in profile "addons-906021"
	I1206 18:01:22.397660   17420 addons.go:231] Setting addon inspektor-gadget=true in "addons-906021"
	I1206 18:01:22.397663   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.397665   17420 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-906021"
	I1206 18:01:22.397668   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.397753   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.397762   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.397530   17420 addons.go:69] Setting ingress=true in profile "addons-906021"
	I1206 18:01:22.397778   17420 addons.go:231] Setting addon ingress=true in "addons-906021"
	I1206 18:01:22.397809   17420 config.go:182] Loaded profile config "addons-906021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:01:22.397816   17420 addons.go:69] Setting metrics-server=true in profile "addons-906021"
	I1206 18:01:22.397821   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.397831   17420 addons.go:231] Setting addon metrics-server=true in "addons-906021"
	I1206 18:01:22.397875   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.398036   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398041   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398179   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398197   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398201   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398211   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398273   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.398332   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.398754   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.397639   17420 addons.go:231] Setting addon ingress-dns=true in "addons-906021"
	I1206 18:01:22.400969   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.401450   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.402335   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.397610   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.402879   17420 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-906021"
	I1206 18:01:22.402907   17420 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-906021"
	I1206 18:01:22.403037   17420 addons.go:69] Setting storage-provisioner=true in profile "addons-906021"
	I1206 18:01:22.403057   17420 addons.go:231] Setting addon storage-provisioner=true in "addons-906021"
	I1206 18:01:22.403091   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.397594   17420 config.go:182] Loaded profile config "addons-906021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:01:22.397609   17420 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-906021"
	I1206 18:01:22.403282   17420 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-906021"
	I1206 18:01:22.403342   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.403796   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.403847   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.407121   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.407273   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
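
Note: each `docker container inspect addons-906021 --format={{.State.Status}}` run above is a separate addon goroutine confirming that the node container is still running before it proceeds. The probe can be reproduced standalone; a minimal Go sketch of the same check (container name taken from this log, error handling simplified):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the log repeats: ask Docker for the container's state.
		out, err := exec.Command("docker", "container", "inspect",
			"addons-906021", "--format", "{{.State.Status}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out))) // expect "running"
	}
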
	I1206 18:01:22.433147   17420 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-906021" context rescaled to 1 replicas
	I1206 18:01:22.433197   17420 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:01:22.436142   17420 out.go:177] * Verifying Kubernetes components...
	I1206 18:01:22.437644   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 18:01:22.437851   17420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:01:22.439017   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 18:01:22.437859   17420 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1206 18:01:22.441870   17420 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1206 18:01:22.443609   17420 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1206 18:01:22.443632   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
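
Note: the `scp memory --> ...` lines mean each manifest is streamed from an in-memory asset to the node over SSH rather than copied from a file on disk. A rough stand-in using the ssh CLI (minikube itself goes through its own ssh_runner and SSH library, so treat this command shape as illustrative only; the port, user, and key path are the ones the sshutil lines below report):

	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	func main() {
		manifest := []byte("# the in-memory deployment.yaml bytes go here\n")
		// Pipe stdin into `sudo tee` on the node, roughly what "scp memory" does.
		cmd := exec.Command("ssh", "-p", "32772",
			"-i", "/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa",
			"docker@127.0.0.1",
			"sudo tee /etc/kubernetes/addons/deployment.yaml >/dev/null")
		cmd.Stdin = bytes.NewReader(manifest)
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
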
	I1206 18:01:22.443691   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.447637   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 18:01:22.449219   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 18:01:22.445311   17420 addons.go:231] Setting addon default-storageclass=true in "addons-906021"
	I1206 18:01:22.446134   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.452023   17420 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1206 18:01:22.452044   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1206 18:01:22.453833   17420 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1206 18:01:22.455615   17420 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1206 18:01:22.452168   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.452098   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.455494   17420 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1206 18:01:22.463318   17420 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1206 18:01:22.461374   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 18:01:22.461659   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1206 18:01:22.461925   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.466563   17420 out.go:177]   - Using image docker.io/registry:2.8.3
	I1206 18:01:22.465318   17420 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 18:01:22.465377   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.469401   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 18:01:22.468356   17420 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 18:01:22.470872   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1206 18:01:22.472083   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.472165   17420 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1206 18:01:22.473611   17420 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 18:01:22.473629   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 18:01:22.473689   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.475262   17420 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 18:01:22.472430   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 18:01:22.477306   17420 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 18:01:22.478472   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 18:01:22.478480   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1206 18:01:22.479994   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 18:01:22.480012   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 18:01:22.480073   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.478540   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.486814   17420 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1206 18:01:22.485648   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.487959   17420 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-906021"
	I1206 18:01:22.488410   17420 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 18:01:22.490443   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1206 18:01:22.490504   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.490648   17420 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:01:22.492186   17420 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:01:22.492203   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 18:01:22.492253   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.490798   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:22.492973   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:22.499519   17420 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 18:01:22.501101   17420 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 18:01:22.501116   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 18:01:22.501161   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.520369   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.526146   17420 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1206 18:01:22.528211   17420 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 18:01:22.528233   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 18:01:22.528293   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.528401   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.530249   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.551502   17420 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 18:01:22.551532   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 18:01:22.551596   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.553078   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.561293   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.564304   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.566607   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.573667   17420 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 18:01:22.574939   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.575177   17420 out.go:177]   - Using image docker.io/busybox:stable
	I1206 18:01:22.576738   17420 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 18:01:22.576758   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 18:01:22.576808   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:22.578093   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.579821   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.583985   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:22.593632   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
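
Note: the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` runs above resolve which host port Docker mapped to the container's SSH port (22/tcp); that is where every `new ssh client: &{IP:127.0.0.1 Port:32772 ...}` line gets its port from. The template expression can be exercised on its own; a self-contained demo against a stand-in for the relevant slice of the inspect JSON:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Stand-ins for the part of the `docker inspect` output the template walks.
	type binding struct{ HostIP, HostPort string }
	type container struct {
		NetworkSettings struct{ Ports map[string][]binding }
	}

	func main() {
		var c container
		c.NetworkSettings.Ports = map[string][]binding{
			"22/tcp": {{HostIP: "127.0.0.1", HostPort: "32772"}}, // value seen in this log
		}
		// The exact expression minikube passes via -f above.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, c); err != nil {
			log.Fatal(err)
		}
		// prints: 32772
	}
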
	I1206 18:01:22.608700   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 18:01:22.609656   17420 node_ready.go:35] waiting up to 6m0s for node "addons-906021" to be "Ready" ...
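
Note: the node_ready.go wait started here (and visible in the repeated `node "addons-906021" has status "Ready":"False"` lines below) boils down to reading the node's Ready condition on each poll. A hedged client-go sketch of that check (package clause and clientset construction omitted; this is not minikube's literal code):

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the named node's Ready condition is True.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
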
	I1206 18:01:22.802831   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 18:01:22.901477   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 18:01:22.917086   17420 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1206 18:01:22.917120   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1206 18:01:23.003855   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 18:01:23.014470   17420 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 18:01:23.014499   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 18:01:23.018669   17420 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 18:01:23.018699   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 18:01:23.021251   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:01:23.109985   17420 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1206 18:01:23.110029   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1206 18:01:23.119084   17420 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1206 18:01:23.119130   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1206 18:01:23.208512   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 18:01:23.208560   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 18:01:23.210098   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 18:01:23.302349   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 18:01:23.304786   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 18:01:23.305211   17420 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 18:01:23.305247   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 18:01:23.406826   17420 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 18:01:23.406862   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 18:01:23.408761   17420 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1206 18:01:23.408794   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1206 18:01:23.409293   17420 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 18:01:23.409312   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 18:01:23.418521   17420 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1206 18:01:23.418605   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1206 18:01:23.608709   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 18:01:23.608802   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 18:01:23.701253   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 18:01:23.703729   17420 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 18:01:23.703861   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 18:01:23.709173   17420 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1206 18:01:23.709199   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1206 18:01:23.902208   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1206 18:01:23.905119   17420 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 18:01:23.905149   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 18:01:24.101300   17420 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 18:01:24.101334   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 18:01:24.102564   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 18:01:24.102588   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 18:01:24.103945   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 18:01:24.104016   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 18:01:24.105666   17420 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1206 18:01:24.105735   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1206 18:01:24.506934   17420 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1206 18:01:24.506959   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1206 18:01:24.511756   17420 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 18:01:24.511780   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 18:01:24.611944   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 18:01:24.611984   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 18:01:24.810597   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 18:01:25.001944   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 18:01:25.010543   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:25.012313   17420 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.403577465s)
	I1206 18:01:25.012432   17420 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
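
Note: the sed pipeline that just completed (2.4s) rewrites the coredns ConfigMap in place: it inserts a `log` directive before `errors` and a `hosts` block before the `forward` stanza, so host.minikube.internal resolves to the host gateway from inside the cluster. After the `kubectl replace`, the Corefile contains roughly (standard directives elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
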
	I1206 18:01:25.111871   17420 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1206 18:01:25.111906   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1206 18:01:25.112205   17420 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 18:01:25.112230   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 18:01:25.423636   17420 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 18:01:25.423663   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 18:01:25.608746   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1206 18:01:25.619539   17420 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 18:01:25.619574   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 18:01:25.905840   17420 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 18:01:25.905870   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 18:01:26.122229   17420 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 18:01:26.122257   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 18:01:26.304056   17420 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 18:01:26.304088   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 18:01:26.524763   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 18:01:26.614749   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.811877615s)
	I1206 18:01:26.614957   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.713450468s)
	I1206 18:01:27.414097   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:28.835408   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.831509642s)
	I1206 18:01:28.835467   17420 addons.go:467] Verifying addon ingress=true in "addons-906021"
	I1206 18:01:28.837373   17420 out.go:177] * Verifying ingress addon...
	I1206 18:01:28.835563   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.814284098s)
	I1206 18:01:28.835603   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.625470601s)
	I1206 18:01:28.835622   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.533245685s)
	I1206 18:01:28.835689   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.530873931s)
	I1206 18:01:28.835716   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.134373524s)
	I1206 18:01:28.835746   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.933512991s)
	I1206 18:01:28.835852   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.025164762s)
	I1206 18:01:28.835938   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.833908806s)
	I1206 18:01:28.836007   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.227223502s)
	I1206 18:01:28.838815   17420 addons.go:467] Verifying addon registry=true in "addons-906021"
	W1206 18:01:28.838846   17420 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 18:01:28.838871   17420 retry.go:31] will retry after 309.329059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
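
Note: the failure above is the classic CRD-ordering race: the same apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, but the API server has not established the new CRDs yet, so the custom resource has no REST mapping ("ensure CRDs are installed first"). minikube's retry.go simply re-applies after a short delay (and, at 18:01:29.148 below, re-runs the batch with --force). A minimal sketch of that retry shape, where applyAddon is a hypothetical stand-in for the failing kubectl invocation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var attempts int

	// applyAddon fails twice to simulate the CRDs not being established yet.
	func applyAddon() error {
		attempts++
		if attempts < 3 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	}

	func main() {
		delay := 300 * time.Millisecond
		for {
			err := applyAddon()
			if err == nil {
				fmt.Println("applied")
				return
			}
			// Jittered backoff, in the spirit of "will retry after 309.329059ms".
			d := delay + time.Duration(rand.Int63n(int64(delay/3)))
			fmt.Printf("apply failed, will retry after %v: %v\n", d, err)
			time.Sleep(d)
			delay *= 2
		}
	}
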
	I1206 18:01:28.840445   17420 out.go:177] * Verifying registry addon...
	I1206 18:01:28.838819   17420 addons.go:467] Verifying addon metrics-server=true in "addons-906021"
	I1206 18:01:28.839508   17420 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 18:01:28.842785   17420 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 18:01:28.905495   17420 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 18:01:28.905520   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:28.905713   17420 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 18:01:28.905731   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1206 18:01:28.909311   17420 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
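
Note: the 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between the read and the write, so the update was rejected for carrying a stale resourceVersion. The standard client-go remedy is to re-read and re-apply the mutation on conflict; a hedged sketch of that pattern (package clause and clientset setup omitted; not minikube's code):

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefault re-reads the StorageClass on each attempt so the update
	// always carries a fresh resourceVersion.
	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
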
	I1206 18:01:28.910328   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:28.910490   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:29.148885   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 18:01:29.310707   17420 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 18:01:29.310782   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:29.334228   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:29.413993   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:29.414157   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:29.519331   17420 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 18:01:29.536990   17420 addons.go:231] Setting addon gcp-auth=true in "addons-906021"
	I1206 18:01:29.537046   17420 host.go:66] Checking if "addons-906021" exists ...
	I1206 18:01:29.537441   17420 cli_runner.go:164] Run: docker container inspect addons-906021 --format={{.State.Status}}
	I1206 18:01:29.555436   17420 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 18:01:29.555490   17420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-906021
	I1206 18:01:29.570476   17420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/addons-906021/id_rsa Username:docker}
	I1206 18:01:29.824549   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.299670274s)
	I1206 18:01:29.824593   17420 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-906021"
	I1206 18:01:29.826437   17420 out.go:177] * Verifying csi-hostpath-driver addon...
	I1206 18:01:29.828500   17420 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 18:01:29.832035   17420 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 18:01:29.832061   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:29.901605   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:29.913897   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:29.914537   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:29.914875   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
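
Note: every kapi.go:96 `waiting for pod ... current state: Pending` line from here on is one tick of the same loop: list pods by label selector, report their phase, sleep, repeat until they are Running or the timeout fires. A hedged client-go equivalent of that loop (package clause and clientset setup omitted):

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls until every pod matching selector in ns is Running.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep waiting; transient errors are retried
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // matches the Pending lines in this log
				}
			}
			return true, nil
		})
	}
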
	I1206 18:01:30.406346   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:30.424162   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:30.425656   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:30.906560   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:30.922733   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:30.923739   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:31.408047   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:31.416134   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:31.416738   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:31.620289   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.471332504s)
	I1206 18:01:31.620373   17420 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.064902998s)
	I1206 18:01:31.622483   17420 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1206 18:01:31.625445   17420 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1206 18:01:31.627017   17420 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 18:01:31.627040   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 18:01:31.711162   17420 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 18:01:31.711189   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 18:01:31.730150   17420 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 18:01:31.730185   17420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1206 18:01:31.817016   17420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 18:01:31.906439   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:31.914773   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:31.915177   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:31.915392   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:32.407042   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:32.414680   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:32.417949   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:32.906913   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:32.914842   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:32.915224   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:33.216536   17420 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.399479192s)
	I1206 18:01:33.217485   17420 addons.go:467] Verifying addon gcp-auth=true in "addons-906021"
	I1206 18:01:33.219178   17420 out.go:177] * Verifying gcp-auth addon...
	I1206 18:01:33.221557   17420 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 18:01:33.224343   17420 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 18:01:33.224364   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:33.228445   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:33.407223   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:33.415093   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:33.415203   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:33.732905   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:33.906484   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:33.914482   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:33.914998   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:34.232512   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:34.405880   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:34.413281   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:34.413720   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:34.413909   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:34.731560   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:34.906567   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:34.914564   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:34.914871   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:35.231807   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:35.406995   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:35.414029   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:35.414278   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:35.732642   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:35.906347   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:35.914621   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:35.914729   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:36.232003   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:36.405463   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:36.414000   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:36.414235   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:36.414462   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:36.731498   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:36.906539   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:36.914449   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:36.914521   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:37.233192   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:37.405759   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:37.413796   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:37.413994   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:37.731886   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:37.905693   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:37.913587   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:37.913745   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:38.231352   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:38.405847   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:38.413993   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:38.414024   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:38.732014   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:38.905935   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:38.913598   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:38.914038   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:38.914614   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:39.231706   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:39.406347   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:39.414351   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:39.414351   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:39.732511   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:39.906334   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:39.915519   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:39.916051   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:40.232144   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:40.405724   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:40.413453   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:40.413690   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:40.732182   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:40.905728   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:40.913658   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:40.913998   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:41.231693   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:41.406176   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:41.413377   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:41.414186   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:41.414449   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:41.731176   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:41.905770   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:41.913567   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:41.913889   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:42.231576   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:42.406343   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:42.414279   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:42.414568   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:42.732376   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:42.905916   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:42.913896   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:42.914094   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:43.231902   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:43.405414   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:43.414781   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:43.415117   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:43.415306   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:43.732045   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:43.907038   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:43.914039   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:43.914246   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:44.232002   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:44.405540   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:44.414625   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:44.414782   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:44.732024   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:44.905519   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:44.914363   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:44.914492   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:45.232189   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:45.405778   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:45.414412   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:45.414705   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:45.731506   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:45.905967   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:45.913451   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:45.914045   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:45.914228   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:46.232458   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:46.405994   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:46.413988   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:46.414329   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:46.732063   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:46.905746   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:46.913627   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:46.913890   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:47.231456   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:47.405772   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:47.413750   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:47.413894   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:47.731562   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:47.906348   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:47.913755   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:47.914276   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:47.914447   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:48.232310   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:48.405855   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:48.413699   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:48.414002   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:48.731690   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:48.905985   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:48.913832   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:48.913838   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:49.231943   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:49.405314   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:49.414063   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:49.414161   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:49.732050   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:49.905546   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:49.913556   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:49.914142   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:50.232163   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:50.405719   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:50.413261   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:50.413633   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:50.413793   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:50.731466   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:50.906117   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:50.913940   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:50.914048   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:51.232029   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:51.405659   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:51.413452   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:51.413558   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:51.732232   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:51.905779   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:51.913777   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:51.913869   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:52.235614   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:52.407853   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:52.413644   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:52.413723   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:52.731589   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:52.906079   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:52.913193   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:52.913888   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:52.913910   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:53.231617   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:53.406132   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:53.413806   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:53.414010   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:53.731652   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:53.906498   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:53.913819   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:53.913926   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:54.231724   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:54.406361   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:54.414223   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:54.414447   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:54.732239   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:54.905590   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:54.913301   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:54.913482   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:55.232205   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:55.406013   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:55.413107   17420 node_ready.go:58] node "addons-906021" has status "Ready":"False"
	I1206 18:01:55.413944   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:55.413944   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:55.731795   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:55.906225   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:55.914170   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:55.914358   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:56.232349   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:56.405870   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:56.413629   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:56.413867   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:56.731566   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:56.906096   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:56.913981   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:56.914164   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:57.231566   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:57.407644   17420 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 18:01:57.407671   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:57.413273   17420 node_ready.go:49] node "addons-906021" has status "Ready":"True"
	I1206 18:01:57.413356   17420 node_ready.go:38] duration metric: took 34.8036283s waiting for node "addons-906021" to be "Ready" ...
	I1206 18:01:57.413376   17420 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:01:57.414555   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:57.414985   17420 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 18:01:57.415060   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:57.424607   17420 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gbtqj" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:57.731476   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:57.908582   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:57.915052   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:57.915173   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:58.231855   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:58.408192   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:58.415274   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:58.415327   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:58.732379   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:58.907118   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:58.914779   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:58.914965   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:58.941368   17420 pod_ready.go:92] pod "coredns-5dd5756b68-gbtqj" in "kube-system" namespace has status "Ready":"True"
	I1206 18:01:58.941394   17420 pod_ready.go:81] duration metric: took 1.516757887s waiting for pod "coredns-5dd5756b68-gbtqj" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:58.941420   17420 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:58.946228   17420 pod_ready.go:92] pod "etcd-addons-906021" in "kube-system" namespace has status "Ready":"True"
	I1206 18:01:58.946252   17420 pod_ready.go:81] duration metric: took 4.824082ms waiting for pod "etcd-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:58.946267   17420 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:58.950652   17420 pod_ready.go:92] pod "kube-apiserver-addons-906021" in "kube-system" namespace has status "Ready":"True"
	I1206 18:01:58.950670   17420 pod_ready.go:81] duration metric: took 4.396789ms waiting for pod "kube-apiserver-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:58.950678   17420 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.013994   17420 pod_ready.go:92] pod "kube-controller-manager-addons-906021" in "kube-system" namespace has status "Ready":"True"
	I1206 18:01:59.014028   17420 pod_ready.go:81] duration metric: took 63.341788ms waiting for pod "kube-controller-manager-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.014043   17420 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t2vs7" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.232787   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:59.407260   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:59.413302   17420 pod_ready.go:92] pod "kube-proxy-t2vs7" in "kube-system" namespace has status "Ready":"True"
	I1206 18:01:59.413331   17420 pod_ready.go:81] duration metric: took 399.278602ms waiting for pod "kube-proxy-t2vs7" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.413344   17420 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.414404   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:59.414585   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:01:59.732090   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:01:59.813512   17420 pod_ready.go:92] pod "kube-scheduler-addons-906021" in "kube-system" namespace has status "Ready":"True"
	I1206 18:01:59.813536   17420 pod_ready.go:81] duration metric: took 400.183505ms waiting for pod "kube-scheduler-addons-906021" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.813545   17420 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-dvqrm" in "kube-system" namespace to be "Ready" ...
	I1206 18:01:59.908001   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:01:59.915734   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:01:59.916165   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:00.232604   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:00.408001   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:00.417252   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:00.417941   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:00.732730   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:00.908121   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:00.915271   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:00.915778   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:01.232166   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:01.407678   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:01.414459   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:01.414985   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:01.732842   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:01.906640   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:01.914460   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:01.914588   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:02.123304   17420 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dvqrm" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:02.232944   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:02.407631   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:02.413967   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:02.414296   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:02.733099   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:02.906837   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:02.914325   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:02.914442   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:03.231597   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:03.408446   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:03.415689   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:03.415760   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:03.734495   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:03.915690   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:03.916336   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:03.920056   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:04.233390   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:04.407220   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:04.415474   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:04.415505   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:04.622931   17420 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dvqrm" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:04.731809   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:04.907663   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:04.914377   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:04.914521   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:05.232557   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:05.408553   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:05.415564   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:05.416315   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:05.732082   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:05.906961   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:05.914467   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:05.914512   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:06.232771   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:06.406177   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:06.415016   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:06.415369   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:06.622977   17420 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dvqrm" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:06.732379   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:06.909009   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:06.917029   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:06.918122   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:07.231969   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:07.407269   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:07.417552   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:07.418332   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:07.622929   17420 pod_ready.go:92] pod "metrics-server-7c66d45ddc-dvqrm" in "kube-system" namespace has status "Ready":"True"
	I1206 18:02:07.622957   17420 pod_ready.go:81] duration metric: took 7.809404829s waiting for pod "metrics-server-7c66d45ddc-dvqrm" in "kube-system" namespace to be "Ready" ...
	I1206 18:02:07.622970   17420 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace to be "Ready" ...
	I1206 18:02:07.732585   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:07.908141   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:07.914841   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:07.915060   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:08.233106   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:08.407158   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:08.415449   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:08.415534   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:08.732202   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:08.907121   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:08.914502   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:08.915306   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:09.232606   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:09.408111   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:09.414520   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:09.415048   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:09.638730   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:09.733339   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:09.908146   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:09.915417   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:09.916557   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:10.232617   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:10.406725   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:10.414497   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:10.415424   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:10.732421   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:10.908092   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:10.915848   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:10.916485   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:11.232375   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:11.408193   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:11.414931   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:11.415150   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:11.733431   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:11.906596   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:11.914757   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:11.914774   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:12.137231   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:12.232613   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:12.408247   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:12.415809   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:12.416319   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:12.731964   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:12.907845   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:12.916519   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:12.916704   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:13.232139   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:13.410039   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:13.422442   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:13.423207   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:13.731725   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:13.908436   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:13.916177   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:13.916694   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:14.137983   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:14.231974   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:14.407048   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:14.414610   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:14.414797   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:14.732667   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:14.910606   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:14.916440   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:14.917171   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:15.232517   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:15.407753   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:15.414697   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:15.415092   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:15.732480   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:15.907005   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:15.914829   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:15.915089   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:16.232075   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:16.406487   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:16.414601   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:16.415086   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:16.637323   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:16.732878   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:16.908442   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:16.915054   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:16.915439   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:17.232745   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:17.407681   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:17.414356   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:17.414817   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:17.732080   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:17.906601   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:17.914866   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:17.915336   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:18.232538   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:18.407168   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:18.415309   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:18.415328   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:18.637391   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:18.732488   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:18.907885   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:18.914764   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:18.914880   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:19.231954   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:19.407080   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:19.415585   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:19.416442   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 18:02:19.731834   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:19.907472   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:19.915090   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:19.915500   17420 kapi.go:107] duration metric: took 51.072714524s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 18:02:20.232124   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:20.406840   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:20.414377   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:20.731394   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:20.907347   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:20.914694   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:21.137852   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:21.232046   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:21.407617   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:21.415298   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:21.731750   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:21.907217   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:21.914241   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:22.231943   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:22.406766   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:22.414201   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:22.731845   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:22.906874   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:22.914269   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:23.231727   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:23.407091   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:23.414399   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:23.637256   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:23.732070   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:23.906209   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:23.914184   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:24.233149   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:24.408872   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:24.415291   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:24.803966   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:24.909679   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:24.915039   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:25.233722   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:25.408754   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:25.414645   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:25.638095   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:25.732104   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:25.907635   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:25.914089   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:26.232399   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:26.408491   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:26.415575   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:26.732621   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:26.913566   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:26.914812   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:27.232342   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:27.407652   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:27.414645   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:27.731694   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:27.907357   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:27.914216   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:28.137112   17420 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"False"
	I1206 18:02:28.231906   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:28.406716   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:28.414754   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:28.733182   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:28.907951   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:28.915269   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:29.231471   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:29.406945   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:29.413857   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:29.732300   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:29.907122   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:29.915809   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:30.138290   17420 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace has status "Ready":"True"
	I1206 18:02:30.138312   17420 pod_ready.go:81] duration metric: took 22.51533483s waiting for pod "nvidia-device-plugin-daemonset-mfv8h" in "kube-system" namespace to be "Ready" ...
	I1206 18:02:30.138331   17420 pod_ready.go:38] duration metric: took 32.724941618s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
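	(The readiness polling above can be reproduced by hand with stock kubectl; a minimal sketch, illustrative only, reusing the context name, namespace, timeout, and two of the label selectors from this run:)

	  kubectl --context addons-906021 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	  kubectl --context addons-906021 -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=6m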
	I1206 18:02:30.138347   17420 api_server.go:52] waiting for apiserver process to appear ...
	I1206 18:02:30.138372   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 18:02:30.138414   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 18:02:30.174949   17420 cri.go:89] found id: "b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f"
	I1206 18:02:30.174975   17420 cri.go:89] found id: ""
	I1206 18:02:30.174986   17420 logs.go:284] 1 containers: [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f]
	I1206 18:02:30.175031   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.178461   17420 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 18:02:30.178545   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 18:02:30.231175   17420 cri.go:89] found id: "79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14"
	I1206 18:02:30.231202   17420 cri.go:89] found id: ""
	I1206 18:02:30.231215   17420 logs.go:284] 1 containers: [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14]
	I1206 18:02:30.231265   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.231785   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:30.234708   17420 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 18:02:30.234771   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 18:02:30.266673   17420 cri.go:89] found id: "2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a"
	I1206 18:02:30.266697   17420 cri.go:89] found id: ""
	I1206 18:02:30.266707   17420 logs.go:284] 1 containers: [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a]
	I1206 18:02:30.266760   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.269854   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 18:02:30.269922   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 18:02:30.301868   17420 cri.go:89] found id: "249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d"
	I1206 18:02:30.301892   17420 cri.go:89] found id: ""
	I1206 18:02:30.301899   17420 logs.go:284] 1 containers: [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d]
	I1206 18:02:30.301948   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.305269   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 18:02:30.305328   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 18:02:30.337854   17420 cri.go:89] found id: "1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a"
	I1206 18:02:30.337877   17420 cri.go:89] found id: ""
	I1206 18:02:30.337885   17420 logs.go:284] 1 containers: [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a]
	I1206 18:02:30.337937   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.341147   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 18:02:30.341214   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 18:02:30.373759   17420 cri.go:89] found id: "2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e"
	I1206 18:02:30.373803   17420 cri.go:89] found id: ""
	I1206 18:02:30.373816   17420 logs.go:284] 1 containers: [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e]
	I1206 18:02:30.373868   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.377335   17420 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 18:02:30.377397   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 18:02:30.407989   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:30.410052   17420 cri.go:89] found id: "30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc"
	I1206 18:02:30.410080   17420 cri.go:89] found id: ""
	I1206 18:02:30.410089   17420 logs.go:284] 1 containers: [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc]
	I1206 18:02:30.410127   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:30.414033   17420 logs.go:123] Gathering logs for kubelet ...
	I1206 18:02:30.414055   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 18:02:30.414306   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:30.495079   17420 logs.go:123] Gathering logs for kube-apiserver [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f] ...
	I1206 18:02:30.495124   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f"
	I1206 18:02:30.542614   17420 logs.go:123] Gathering logs for etcd [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14] ...
	I1206 18:02:30.542662   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14"
	I1206 18:02:30.589129   17420 logs.go:123] Gathering logs for coredns [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a] ...
	I1206 18:02:30.589161   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a"
	I1206 18:02:30.622400   17420 logs.go:123] Gathering logs for kube-proxy [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a] ...
	I1206 18:02:30.622429   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a"
	I1206 18:02:30.654326   17420 logs.go:123] Gathering logs for kindnet [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc] ...
	I1206 18:02:30.654353   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc"
	I1206 18:02:30.687668   17420 logs.go:123] Gathering logs for dmesg ...
	I1206 18:02:30.687697   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 18:02:30.698757   17420 logs.go:123] Gathering logs for describe nodes ...
	I1206 18:02:30.698786   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 18:02:30.731890   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:30.802851   17420 logs.go:123] Gathering logs for kube-scheduler [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d] ...
	I1206 18:02:30.802887   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d"
	I1206 18:02:30.847705   17420 logs.go:123] Gathering logs for kube-controller-manager [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e] ...
	I1206 18:02:30.847742   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e"
	I1206 18:02:30.902669   17420 logs.go:123] Gathering logs for CRI-O ...
	I1206 18:02:30.902698   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 18:02:30.907049   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:30.914603   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:30.993830   17420 logs.go:123] Gathering logs for container status ...
	I1206 18:02:30.993868   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 18:02:31.232187   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:31.407075   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:31.415358   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:31.732682   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 18:02:31.907846   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:31.914104   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:32.232392   17420 kapi.go:107] duration metric: took 59.010830883s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 18:02:32.234339   17420 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-906021 cluster.
	I1206 18:02:32.235930   17420 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 18:02:32.237470   17420 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 18:02:32.409551   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:32.502665   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:32.908327   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:32.914867   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:33.408448   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:33.414887   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:33.536240   17420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 18:02:33.617040   17420 api_server.go:72] duration metric: took 1m11.183806334s to wait for apiserver process to appear ...
	I1206 18:02:33.617068   17420 api_server.go:88] waiting for apiserver healthz status ...
	I1206 18:02:33.617114   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 18:02:33.617173   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 18:02:33.730768   17420 cri.go:89] found id: "b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f"
	I1206 18:02:33.730796   17420 cri.go:89] found id: ""
	I1206 18:02:33.730809   17420 logs.go:284] 1 containers: [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f]
	I1206 18:02:33.730859   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:33.734573   17420 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 18:02:33.734644   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 18:02:33.836000   17420 cri.go:89] found id: "79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14"
	I1206 18:02:33.836026   17420 cri.go:89] found id: ""
	I1206 18:02:33.836037   17420 logs.go:284] 1 containers: [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14]
	I1206 18:02:33.836084   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:33.840112   17420 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 18:02:33.840182   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 18:02:33.907881   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:33.914164   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:34.029101   17420 cri.go:89] found id: "2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a"
	I1206 18:02:34.029203   17420 cri.go:89] found id: ""
	I1206 18:02:34.029225   17420 logs.go:284] 1 containers: [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a]
	I1206 18:02:34.029302   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:34.034074   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 18:02:34.034144   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 18:02:34.209261   17420 cri.go:89] found id: "249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d"
	I1206 18:02:34.209291   17420 cri.go:89] found id: ""
	I1206 18:02:34.209301   17420 logs.go:284] 1 containers: [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d]
	I1206 18:02:34.209355   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:34.213262   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 18:02:34.213324   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 18:02:34.306882   17420 cri.go:89] found id: "1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a"
	I1206 18:02:34.306909   17420 cri.go:89] found id: ""
	I1206 18:02:34.306920   17420 logs.go:284] 1 containers: [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a]
	I1206 18:02:34.306978   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:34.310779   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 18:02:34.310847   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 18:02:34.407251   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:34.409930   17420 cri.go:89] found id: "2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e"
	I1206 18:02:34.409951   17420 cri.go:89] found id: ""
	I1206 18:02:34.409961   17420 logs.go:284] 1 containers: [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e]
	I1206 18:02:34.410017   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:34.415266   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:34.415323   17420 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 18:02:34.415377   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 18:02:34.508336   17420 cri.go:89] found id: "30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc"
	I1206 18:02:34.508365   17420 cri.go:89] found id: ""
	I1206 18:02:34.508375   17420 logs.go:284] 1 containers: [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc]
	I1206 18:02:34.508435   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:34.511742   17420 logs.go:123] Gathering logs for kubelet ...
	I1206 18:02:34.511768   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 18:02:34.593358   17420 logs.go:123] Gathering logs for dmesg ...
	I1206 18:02:34.593406   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 18:02:34.609581   17420 logs.go:123] Gathering logs for describe nodes ...
	I1206 18:02:34.609614   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 18:02:34.812541   17420 logs.go:123] Gathering logs for kube-scheduler [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d] ...
	I1206 18:02:34.812584   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d"
	I1206 18:02:34.853678   17420 logs.go:123] Gathering logs for kube-proxy [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a] ...
	I1206 18:02:34.853718   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a"
	I1206 18:02:34.908398   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:34.915573   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:34.918108   17420 logs.go:123] Gathering logs for kindnet [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc] ...
	I1206 18:02:34.918139   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc"
	I1206 18:02:34.954586   17420 logs.go:123] Gathering logs for kube-apiserver [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f] ...
	I1206 18:02:34.954631   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f"
	I1206 18:02:35.050997   17420 logs.go:123] Gathering logs for etcd [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14] ...
	I1206 18:02:35.051030   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14"
	I1206 18:02:35.155865   17420 logs.go:123] Gathering logs for coredns [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a] ...
	I1206 18:02:35.155898   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a"
	I1206 18:02:35.238756   17420 logs.go:123] Gathering logs for kube-controller-manager [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e] ...
	I1206 18:02:35.238790   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e"
	I1206 18:02:35.331613   17420 logs.go:123] Gathering logs for CRI-O ...
	I1206 18:02:35.331648   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 18:02:35.406662   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:35.410036   17420 logs.go:123] Gathering logs for container status ...
	I1206 18:02:35.410062   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 18:02:35.414446   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:35.907932   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:35.915007   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:36.406737   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:36.415458   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:36.906751   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:36.914670   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:37.407021   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:37.414681   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:37.920673   17420 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 18:02:37.921795   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:37.957818   17420 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 18:02:37.962050   17420 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 18:02:37.963104   17420 api_server.go:141] control plane version: v1.28.4
	I1206 18:02:37.963125   17420 api_server.go:131] duration metric: took 4.346050846s to wait for apiserver health ...
	I1206 18:02:37.963132   17420 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 18:02:37.963150   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 18:02:37.963193   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 18:02:37.997401   17420 cri.go:89] found id: "b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f"
	I1206 18:02:37.997427   17420 cri.go:89] found id: ""
	I1206 18:02:37.997436   17420 logs.go:284] 1 containers: [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f]
	I1206 18:02:37.997494   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.000720   17420 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 18:02:38.000780   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 18:02:38.034292   17420 cri.go:89] found id: "79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14"
	I1206 18:02:38.034314   17420 cri.go:89] found id: ""
	I1206 18:02:38.034323   17420 logs.go:284] 1 containers: [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14]
	I1206 18:02:38.034375   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.037572   17420 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 18:02:38.037641   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 18:02:38.075927   17420 cri.go:89] found id: "2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a"
	I1206 18:02:38.075949   17420 cri.go:89] found id: ""
	I1206 18:02:38.075956   17420 logs.go:284] 1 containers: [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a]
	I1206 18:02:38.075998   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.079816   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 18:02:38.079907   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 18:02:38.215642   17420 cri.go:89] found id: "249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d"
	I1206 18:02:38.215669   17420 cri.go:89] found id: ""
	I1206 18:02:38.215679   17420 logs.go:284] 1 containers: [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d]
	I1206 18:02:38.215741   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.219694   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 18:02:38.219792   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 18:02:38.408427   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:38.411780   17420 cri.go:89] found id: "1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a"
	I1206 18:02:38.411802   17420 cri.go:89] found id: ""
	I1206 18:02:38.411811   17420 logs.go:284] 1 containers: [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a]
	I1206 18:02:38.411857   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.415450   17420 kapi.go:107] duration metric: took 1m9.57593703s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 18:02:38.416089   17420 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 18:02:38.416143   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 18:02:38.601594   17420 cri.go:89] found id: "2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e"
	I1206 18:02:38.601623   17420 cri.go:89] found id: ""
	I1206 18:02:38.601634   17420 logs.go:284] 1 containers: [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e]
	I1206 18:02:38.601692   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.605564   17420 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 18:02:38.605632   17420 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 18:02:38.647416   17420 cri.go:89] found id: "30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc"
	I1206 18:02:38.647445   17420 cri.go:89] found id: ""
	I1206 18:02:38.647457   17420 logs.go:284] 1 containers: [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc]
	I1206 18:02:38.647514   17420 ssh_runner.go:195] Run: which crictl
	I1206 18:02:38.650857   17420 logs.go:123] Gathering logs for kube-scheduler [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d] ...
	I1206 18:02:38.650886   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d"
	I1206 18:02:38.745567   17420 logs.go:123] Gathering logs for kindnet [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc] ...
	I1206 18:02:38.745608   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc"
	I1206 18:02:38.824036   17420 logs.go:123] Gathering logs for etcd [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14] ...
	I1206 18:02:38.824064   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14"
	I1206 18:02:38.907387   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:38.932724   17420 logs.go:123] Gathering logs for dmesg ...
	I1206 18:02:38.932771   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 18:02:39.006268   17420 logs.go:123] Gathering logs for describe nodes ...
	I1206 18:02:39.006350   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1206 18:02:39.245986   17420 logs.go:123] Gathering logs for kube-apiserver [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f] ...
	I1206 18:02:39.246027   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f"
	I1206 18:02:39.340085   17420 logs.go:123] Gathering logs for coredns [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a] ...
	I1206 18:02:39.340135   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a"
	I1206 18:02:39.409595   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:39.430865   17420 logs.go:123] Gathering logs for kube-proxy [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a] ...
	I1206 18:02:39.430905   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a"
	I1206 18:02:39.504441   17420 logs.go:123] Gathering logs for kube-controller-manager [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e] ...
	I1206 18:02:39.504481   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e"
	I1206 18:02:39.565420   17420 logs.go:123] Gathering logs for CRI-O ...
	I1206 18:02:39.565456   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 18:02:39.638139   17420 logs.go:123] Gathering logs for kubelet ...
	I1206 18:02:39.638178   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 18:02:39.711591   17420 logs.go:123] Gathering logs for container status ...
	I1206 18:02:39.711636   17420 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 18:02:39.907225   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:40.406717   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:40.906596   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:41.407691   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:41.906633   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:42.262521   17420 system_pods.go:59] 19 kube-system pods found
	I1206 18:02:42.262588   17420 system_pods.go:61] "coredns-5dd5756b68-gbtqj" [54d1a8d2-c55a-4ddc-a1fb-e6fbacd213d3] Running
	I1206 18:02:42.262599   17420 system_pods.go:61] "csi-hostpath-attacher-0" [85ffe067-a64a-4b4d-94d7-809dcb4593d2] Running
	I1206 18:02:42.262607   17420 system_pods.go:61] "csi-hostpath-resizer-0" [bdd65826-98a1-4807-9bc8-131be57c19f4] Running
	I1206 18:02:42.262634   17420 system_pods.go:61] "csi-hostpathplugin-szpn2" [c4493a7b-32a7-4dd1-9fd1-a7fa6bdaf89e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 18:02:42.262648   17420 system_pods.go:61] "etcd-addons-906021" [a42d1c4e-bffa-435f-b7b1-5e484f339b20] Running
	I1206 18:02:42.262657   17420 system_pods.go:61] "kindnet-j9vqn" [62b33571-2bc7-4dd2-a656-9b6c991bdd43] Running
	I1206 18:02:42.262664   17420 system_pods.go:61] "kube-apiserver-addons-906021" [bc3a093e-ccb1-4f7c-9f34-e898a104024b] Running
	I1206 18:02:42.262676   17420 system_pods.go:61] "kube-controller-manager-addons-906021" [2705a990-e02b-4250-a1df-a393186d569d] Running
	I1206 18:02:42.262686   17420 system_pods.go:61] "kube-ingress-dns-minikube" [3ee35abf-c990-4bcd-976c-1df07596953e] Running
	I1206 18:02:42.262697   17420 system_pods.go:61] "kube-proxy-t2vs7" [009ff31f-8566-4aeb-a011-59032341e304] Running
	I1206 18:02:42.262704   17420 system_pods.go:61] "kube-scheduler-addons-906021" [0b3a65d5-892a-4d29-ade6-cd774f4526e8] Running
	I1206 18:02:42.262711   17420 system_pods.go:61] "metrics-server-7c66d45ddc-dvqrm" [009f5378-9bf7-4107-ba9e-30c7fa55e4ff] Running
	I1206 18:02:42.262719   17420 system_pods.go:61] "nvidia-device-plugin-daemonset-mfv8h" [e1933ed1-4726-4a86-86e6-0753ce7d0f72] Running
	I1206 18:02:42.262726   17420 system_pods.go:61] "registry-proxy-6qg5h" [66569529-08b4-49b4-b8d3-adc07070b1c8] Running
	I1206 18:02:42.262734   17420 system_pods.go:61] "registry-xw24r" [82901825-2736-48f6-872f-0b11f797e48d] Running
	I1206 18:02:42.262742   17420 system_pods.go:61] "snapshot-controller-58dbcc7b99-bwc4z" [911344e1-48f6-400a-a17e-76295c2d0d79] Running
	I1206 18:02:42.262752   17420 system_pods.go:61] "snapshot-controller-58dbcc7b99-n78d7" [a3898ddc-6e21-4a2c-9edd-6864ab50a0df] Running
	I1206 18:02:42.262759   17420 system_pods.go:61] "storage-provisioner" [c005f125-bb07-42c6-a012-ea2eb61e1793] Running
	I1206 18:02:42.262770   17420 system_pods.go:61] "tiller-deploy-7b677967b9-tmmzw" [9f43c435-3e04-42b6-9440-5f692aa79d97] Running
	I1206 18:02:42.262784   17420 system_pods.go:74] duration metric: took 4.299643526s to wait for pod list to return data ...
	I1206 18:02:42.262796   17420 default_sa.go:34] waiting for default service account to be created ...
	I1206 18:02:42.265177   17420 default_sa.go:45] found service account: "default"
	I1206 18:02:42.265203   17420 default_sa.go:55] duration metric: took 2.399169ms for default service account to be created ...
	I1206 18:02:42.265211   17420 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 18:02:42.273059   17420 system_pods.go:86] 19 kube-system pods found
	I1206 18:02:42.273086   17420 system_pods.go:89] "coredns-5dd5756b68-gbtqj" [54d1a8d2-c55a-4ddc-a1fb-e6fbacd213d3] Running
	I1206 18:02:42.273091   17420 system_pods.go:89] "csi-hostpath-attacher-0" [85ffe067-a64a-4b4d-94d7-809dcb4593d2] Running
	I1206 18:02:42.273096   17420 system_pods.go:89] "csi-hostpath-resizer-0" [bdd65826-98a1-4807-9bc8-131be57c19f4] Running
	I1206 18:02:42.273103   17420 system_pods.go:89] "csi-hostpathplugin-szpn2" [c4493a7b-32a7-4dd1-9fd1-a7fa6bdaf89e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 18:02:42.273108   17420 system_pods.go:89] "etcd-addons-906021" [a42d1c4e-bffa-435f-b7b1-5e484f339b20] Running
	I1206 18:02:42.273114   17420 system_pods.go:89] "kindnet-j9vqn" [62b33571-2bc7-4dd2-a656-9b6c991bdd43] Running
	I1206 18:02:42.273119   17420 system_pods.go:89] "kube-apiserver-addons-906021" [bc3a093e-ccb1-4f7c-9f34-e898a104024b] Running
	I1206 18:02:42.273128   17420 system_pods.go:89] "kube-controller-manager-addons-906021" [2705a990-e02b-4250-a1df-a393186d569d] Running
	I1206 18:02:42.273136   17420 system_pods.go:89] "kube-ingress-dns-minikube" [3ee35abf-c990-4bcd-976c-1df07596953e] Running
	I1206 18:02:42.273148   17420 system_pods.go:89] "kube-proxy-t2vs7" [009ff31f-8566-4aeb-a011-59032341e304] Running
	I1206 18:02:42.273159   17420 system_pods.go:89] "kube-scheduler-addons-906021" [0b3a65d5-892a-4d29-ade6-cd774f4526e8] Running
	I1206 18:02:42.273165   17420 system_pods.go:89] "metrics-server-7c66d45ddc-dvqrm" [009f5378-9bf7-4107-ba9e-30c7fa55e4ff] Running
	I1206 18:02:42.273179   17420 system_pods.go:89] "nvidia-device-plugin-daemonset-mfv8h" [e1933ed1-4726-4a86-86e6-0753ce7d0f72] Running
	I1206 18:02:42.273183   17420 system_pods.go:89] "registry-proxy-6qg5h" [66569529-08b4-49b4-b8d3-adc07070b1c8] Running
	I1206 18:02:42.273187   17420 system_pods.go:89] "registry-xw24r" [82901825-2736-48f6-872f-0b11f797e48d] Running
	I1206 18:02:42.273191   17420 system_pods.go:89] "snapshot-controller-58dbcc7b99-bwc4z" [911344e1-48f6-400a-a17e-76295c2d0d79] Running
	I1206 18:02:42.273195   17420 system_pods.go:89] "snapshot-controller-58dbcc7b99-n78d7" [a3898ddc-6e21-4a2c-9edd-6864ab50a0df] Running
	I1206 18:02:42.273199   17420 system_pods.go:89] "storage-provisioner" [c005f125-bb07-42c6-a012-ea2eb61e1793] Running
	I1206 18:02:42.273203   17420 system_pods.go:89] "tiller-deploy-7b677967b9-tmmzw" [9f43c435-3e04-42b6-9440-5f692aa79d97] Running
	I1206 18:02:42.273210   17420 system_pods.go:126] duration metric: took 7.993762ms to wait for k8s-apps to be running ...
	I1206 18:02:42.273218   17420 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 18:02:42.273271   17420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:02:42.285166   17420 system_svc.go:56] duration metric: took 11.937575ms WaitForService to wait for kubelet.
	I1206 18:02:42.285194   17420 kubeadm.go:581] duration metric: took 1m19.851967223s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 18:02:42.285220   17420 node_conditions.go:102] verifying NodePressure condition ...
	I1206 18:02:42.288166   17420 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 18:02:42.288193   17420 node_conditions.go:123] node cpu capacity is 8
	I1206 18:02:42.288204   17420 node_conditions.go:105] duration metric: took 2.979749ms to run NodePressure ...
	I1206 18:02:42.288216   17420 start.go:228] waiting for startup goroutines ...
	I1206 18:02:42.407464   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:42.908070   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:43.406468   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:43.906398   17420 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 18:02:44.407608   17420 kapi.go:107] duration metric: took 1m14.57910679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 18:02:44.409971   17420 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, inspektor-gadget, helm-tiller, ingress-dns, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1206 18:02:44.411878   17420 addons.go:502] enable addons completed in 1m22.014470482s: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner inspektor-gadget helm-tiller ingress-dns metrics-server default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1206 18:02:44.411924   17420 start.go:233] waiting for cluster config update ...
	I1206 18:02:44.411944   17420 start.go:242] writing updated cluster config ...
	I1206 18:02:44.412238   17420 ssh_runner.go:195] Run: rm -f paused
	I1206 18:02:44.461216   17420 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 18:02:44.463417   17420 out.go:177] * Done! kubectl is now configured to use "addons-906021" cluster and "default" namespace by default
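	Note: the startup log above interleaves three activities: polling addon pods (the kapi.go "waiting for pod" lines), waiting on the apiserver, and a recurring diagnostic sweep that lists CRI containers and tails their logs. A minimal sketch of that sweep, runnable by hand on the node over `minikube ssh` (the container ID is a placeholder; the healthz URL is the one probed above, and `-k` to skip TLS verification is an assumption):
	
	  # List a component's containers in all states, as the sweep does
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # Tail the last 400 log lines of one container
	  sudo crictl logs --tail 400 <container-id>
	  # Service logs for the kubelet and the CRI-O runtime
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  # The health probe behind "Checking apiserver healthz"
	  curl -k https://192.168.49.2:8443/healthz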
	
	* 
	* ==> CRI-O <==
	* Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.427572982Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=5f54a0bf-6ca7-49c1-912d-d4588851871e name=/runtime.v1.ImageService/PullImage
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.428426144Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=a45b7ead-c228-4738-9ada-260ab6f4ae07 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.429428868Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=a45b7ead-c228-4738-9ada-260ab6f4ae07 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.430213269Z" level=info msg="Creating container: default/hello-world-app-5d77478584-djpgf/hello-world-app" id=5b349768-12aa-40af-a34c-d24f9c0ad254 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.430296775Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.506584253Z" level=info msg="Created container f7fa1f244866db5a91fe7ad390c59edd777228f90af5e3d8c9567fd720efcb3a: default/hello-world-app-5d77478584-djpgf/hello-world-app" id=5b349768-12aa-40af-a34c-d24f9c0ad254 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.507231844Z" level=info msg="Starting container: f7fa1f244866db5a91fe7ad390c59edd777228f90af5e3d8c9567fd720efcb3a" id=0fd1a199-2309-470b-9a13-ebc8b1c16bfc name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.516560623Z" level=info msg="Started container" PID=10786 containerID=f7fa1f244866db5a91fe7ad390c59edd777228f90af5e3d8c9567fd720efcb3a description=default/hello-world-app-5d77478584-djpgf/hello-world-app id=0fd1a199-2309-470b-9a13-ebc8b1c16bfc name=/runtime.v1.RuntimeService/StartContainer sandboxID=3615bb033bc7950921296ff01f2ac6693e9dca24e57928f18575e68f9d6cc1d5
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.560342366Z" level=info msg="Removing container: 3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8" id=1be83000-f1d1-4058-9c4d-e666e4c5c711 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 18:05:11 addons-906021 crio[950]: time="2023-12-06 18:05:11.576577799Z" level=info msg="Removed container 3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=1be83000-f1d1-4058-9c4d-e666e4c5c711 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 18:05:13 addons-906021 crio[950]: time="2023-12-06 18:05:13.101409273Z" level=info msg="Stopping container: eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56 (timeout: 2s)" id=256916df-d760-4350-9e08-dfc3a2296c61 name=/runtime.v1.RuntimeService/StopContainer
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.110189698Z" level=warning msg="Stopping container eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=256916df-d760-4350-9e08-dfc3a2296c61 name=/runtime.v1.RuntimeService/StopContainer
	Dec 06 18:05:15 addons-906021 conmon[6411]: conmon eda999d38be156c80cac <ninfo>: container 6423 exited with status 137
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.254477856Z" level=info msg="Stopped container eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56: ingress-nginx/ingress-nginx-controller-7c6974c4d8-bchdk/controller" id=256916df-d760-4350-9e08-dfc3a2296c61 name=/runtime.v1.RuntimeService/StopContainer
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.254982400Z" level=info msg="Stopping pod sandbox: a521f253437ada3acf345e3db979c0f4003a9b027751d2dd3b4b1414d4fee20e" id=54f8745d-26d5-4501-a999-95d4b884b649 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.257890310Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-UOP6ROEENFXUOLT7 - [0:0]\n:KUBE-HP-B7CQOLCTCAU5QVQX - [0:0]\n-X KUBE-HP-B7CQOLCTCAU5QVQX\n-X KUBE-HP-UOP6ROEENFXUOLT7\nCOMMIT\n"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.259198017Z" level=info msg="Closing host port tcp:80"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.259236960Z" level=info msg="Closing host port tcp:443"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.260666902Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.260685696Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.260811883Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-bchdk Namespace:ingress-nginx ID:a521f253437ada3acf345e3db979c0f4003a9b027751d2dd3b4b1414d4fee20e UID:810a9337-af07-402b-be8f-bd24b487561f NetNS:/var/run/netns/0682ecbd-3e46-4a14-bfc1-8319f99c1aca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.260925411Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-bchdk from CNI network \"kindnet\" (type=ptp)"
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.289581368Z" level=info msg="Stopped pod sandbox: a521f253437ada3acf345e3db979c0f4003a9b027751d2dd3b4b1414d4fee20e" id=54f8745d-26d5-4501-a999-95d4b884b649 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.571584114Z" level=info msg="Removing container: eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56" id=997c87f7-c591-454f-a9ab-da0b0c12da7a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 18:05:15 addons-906021 crio[950]: time="2023-12-06 18:05:15.586369580Z" level=info msg="Removed container eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56: ingress-nginx/ingress-nginx-controller-7c6974c4d8-bchdk/controller" id=997c87f7-c591-454f-a9ab-da0b0c12da7a name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7fa1f244866d       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   3615bb033bc79       hello-world-app-5d77478584-djpgf
	18bae089fb375       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   c11daca4c8799       headlamp-777fd4b855-s2zs8
	ddc84e64388f0       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   477d2a7728459       nginx
	10b88ed71a7c6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   2a7390654a345       gcp-auth-d4c87556c-wcvjx
	c119b5aafe73e       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             2 minutes ago       Exited              patch                     2                   1e31721dca317       ingress-nginx-admission-patch-42dgf
	61143e8eabbeb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   4a5a503dbdf9b       ingress-nginx-admission-create-h9wfl
	6f3789673758a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   44e810bed9f90       local-path-provisioner-78b46b4d5c-nsjs4
	2f685528bb4a5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   026c5685f3efa       coredns-5dd5756b68-gbtqj
	f8616d11c7e2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   d23c30c23941f       storage-provisioner
	1f673af8b9978       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             3 minutes ago       Running             kube-proxy                0                   b9e12174ae86d       kube-proxy-t2vs7
	30af1852ecb11       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             3 minutes ago       Running             kindnet-cni               0                   7332948e0d06e       kindnet-j9vqn
	79890cd8bf2d5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   29bf56ce16ef2       etcd-addons-906021
	249657d2091ff       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   b390108703f22       kube-scheduler-addons-906021
	2f8dc468d8700       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   cf39c0dcc8337       kube-controller-manager-addons-906021
	b7b4fddfb716b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   958fb53f97829       kube-apiserver-addons-906021
	
	* 
	* ==> coredns [2f685528bb4a5f0ea24810076754ec5c3d0d0db63513c451fb8ed624387dca0a] <==
	* [INFO] 10.244.0.10:51082 - 17232 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004743s
	[INFO] 10.244.0.10:41130 - 12465 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004820597s
	[INFO] 10.244.0.10:41130 - 5556 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005187748s
	[INFO] 10.244.0.10:42290 - 22605 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003588595s
	[INFO] 10.244.0.10:42290 - 37448 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004644181s
	[INFO] 10.244.0.10:38354 - 22180 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004662232s
	[INFO] 10.244.0.10:38354 - 9383 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006850924s
	[INFO] 10.244.0.10:43411 - 60911 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092277s
	[INFO] 10.244.0.10:43411 - 7403 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151498s
	[INFO] 10.244.0.20:42201 - 16084 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175422s
	[INFO] 10.244.0.20:47641 - 52086 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00024373s
	[INFO] 10.244.0.20:55685 - 4667 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130877s
	[INFO] 10.244.0.20:48726 - 19806 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000175514s
	[INFO] 10.244.0.20:56144 - 34419 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138375s
	[INFO] 10.244.0.20:44957 - 31672 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129807s
	[INFO] 10.244.0.20:33552 - 61606 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007892662s
	[INFO] 10.244.0.20:57653 - 59820 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.00822051s
	[INFO] 10.244.0.20:47495 - 46025 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006635786s
	[INFO] 10.244.0.20:37238 - 52854 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008075961s
	[INFO] 10.244.0.20:39980 - 42432 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006322528s
	[INFO] 10.244.0.20:56811 - 5815 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007879885s
	[INFO] 10.244.0.20:55006 - 38708 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000710456s
	[INFO] 10.244.0.20:40302 - 20182 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.000639859s
	[INFO] 10.244.0.22:36517 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000199526s
	[INFO] 10.244.0.22:50808 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135662s
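	Note: the NXDOMAIN runs above are the resolver's search-path expansion at work: with ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix (cluster.local, then the GCE-internal domains) before the bare name finally answers NOERROR. A quick way to see the config that drives this (pod name is a placeholder; the nameserver IP shown is minikube's usual kube-dns ClusterIP, an assumption):
	
	  # Print the resolver config the kubelet wrote into the pod
	  kubectl exec <pod> -- cat /etc/resolv.conf
	  # Expected shape, inferred from the suffixes queried above:
	  #   nameserver 10.96.0.10
	  #   search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	  #   options ndots:5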
	
	* 
	* ==> describe nodes <==
	* Name:               addons-906021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-906021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3
	                    minikube.k8s.io/name=addons-906021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T18_01_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-906021
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 18:01:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-906021
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 18:05:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 18:04:13 +0000   Wed, 06 Dec 2023 18:01:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 18:04:13 +0000   Wed, 06 Dec 2023 18:01:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 18:04:13 +0000   Wed, 06 Dec 2023 18:01:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 18:04:13 +0000   Wed, 06 Dec 2023 18:01:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-906021
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 1fbf2807e2cd4c13abf487b42e3b8846
	  System UUID:                15a06a43-c029-4e7b-b528-03df13c0c205
	  Boot ID:                    5f16510a-fcc2-4dea-8318-41aa6150c4de
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-djpgf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-d4c87556c-wcvjx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  headlamp                    headlamp-777fd4b855-s2zs8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 coredns-5dd5756b68-gbtqj                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m57s
	  kube-system                 etcd-addons-906021                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m11s
	  kube-system                 kindnet-j9vqn                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m58s
	  kube-system                 kube-apiserver-addons-906021               250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-addons-906021      200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-t2vs7                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-scheduler-addons-906021               100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  local-path-storage          local-path-provisioner-78b46b4d5c-nsjs4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)   100m (1%)
	  memory             220Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node addons-906021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node addons-906021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x8 over 4m17s)  kubelet          Node addons-906021 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s                  kubelet          Node addons-906021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s                  kubelet          Node addons-906021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s                  kubelet          Node addons-906021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m59s                  node-controller  Node addons-906021 event: Registered Node addons-906021 in Controller
	  Normal  NodeReady                3m24s                  kubelet          Node addons-906021 status is now: NodeReady
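For reference, the node summary and event history above are standard kubectl describe node output; a minimal sketch to regenerate this view against the same profile, assuming it still exists:

	kubectl --context addons-906021 describe node addons-906021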
	
	* 
	* ==> dmesg <==
	* [  +0.007716] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003050] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000640] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000634] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000704] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000621] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000645] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000621] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000661] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000609] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.329422] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 6 18:03] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	[  +1.004000] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	[  +2.015904] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	[  +4.031674] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	[  +8.191462] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	[ +16.126838] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	[Dec 6 18:04] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 66 69 97 82 97 58 1e 2c 21 ce 56 d7 08 00
	
	* 
	* ==> etcd [79890cd8bf2d5f4d3f2aeaf988421dd316c863fc89061c369bf898513f11ea14] <==
	* {"level":"info","ts":"2023-12-06T18:01:26.02244Z","caller":"traceutil/trace.go:171","msg":"trace[1333997678] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"203.109523ms","start":"2023-12-06T18:01:25.819313Z","end":"2023-12-06T18:01:26.022422Z","steps":["trace[1333997678] 'process raft request'  (duration: 85.124153ms)","trace[1333997678] 'compare'  (duration: 102.165566ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-06T18:01:26.017108Z","caller":"traceutil/trace.go:171","msg":"trace[623987921] linearizableReadLoop","detail":"{readStateIndex:429; appliedIndex:428; }","duration":"197.186322ms","start":"2023-12-06T18:01:25.819905Z","end":"2023-12-06T18:01:26.017091Z","steps":["trace[623987921] 'read index received'  (duration: 84.484734ms)","trace[623987921] 'applied index is now lower than readState.Index'  (duration: 112.700242ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T18:01:26.023781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.890101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-06T18:01:26.023812Z","caller":"traceutil/trace.go:171","msg":"trace[1058390888] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:419; }","duration":"203.93056ms","start":"2023-12-06T18:01:25.819872Z","end":"2023-12-06T18:01:26.023803Z","steps":["trace[1058390888] 'agreement among raft nodes before linearized reading'  (duration: 203.869897ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:01:26.02395Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.920541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-06T18:01:26.023966Z","caller":"traceutil/trace.go:171","msg":"trace[1465935304] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:419; }","duration":"203.937155ms","start":"2023-12-06T18:01:25.820022Z","end":"2023-12-06T18:01:26.023959Z","steps":["trace[1465935304] 'agreement among raft nodes before linearized reading'  (duration: 203.908099ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:01:26.024142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.944576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-906021\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-12-06T18:01:26.024159Z","caller":"traceutil/trace.go:171","msg":"trace[1044433370] range","detail":"{range_begin:/registry/minions/addons-906021; range_end:; response_count:1; response_revision:419; }","duration":"100.962987ms","start":"2023-12-06T18:01:25.92319Z","end":"2023-12-06T18:01:26.024153Z","steps":["trace[1044433370] 'agreement among raft nodes before linearized reading'  (duration: 100.927136ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-06T18:01:26.024239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.463614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2023-12-06T18:01:26.024251Z","caller":"traceutil/trace.go:171","msg":"trace[1591039662] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5dd5756b68; range_end:; response_count:1; response_revision:419; }","duration":"116.47602ms","start":"2023-12-06T18:01:25.907771Z","end":"2023-12-06T18:01:26.024247Z","steps":["trace[1591039662] 'agreement among raft nodes before linearized reading'  (duration: 116.449213ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:02:06.23119Z","caller":"traceutil/trace.go:171","msg":"trace[1804450093] transaction","detail":"{read_only:false; response_revision:949; number_of_response:1; }","duration":"108.13748ms","start":"2023-12-06T18:02:06.123032Z","end":"2023-12-06T18:02:06.23117Z","steps":["trace[1804450093] 'process raft request'  (duration: 107.683188ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:02:32.611414Z","caller":"traceutil/trace.go:171","msg":"trace[1200277146] linearizableReadLoop","detail":"{readStateIndex:1152; appliedIndex:1151; }","duration":"100.374019ms","start":"2023-12-06T18:02:32.511023Z","end":"2023-12-06T18:02:32.611397Z","steps":["trace[1200277146] 'read index received'  (duration: 100.3139ms)","trace[1200277146] 'applied index is now lower than readState.Index'  (duration: 59.493µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T18:02:32.611538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.51899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-06T18:02:32.611563Z","caller":"traceutil/trace.go:171","msg":"trace[1450455249] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1120; }","duration":"100.562338ms","start":"2023-12-06T18:02:32.510992Z","end":"2023-12-06T18:02:32.611555Z","steps":["trace[1450455249] 'agreement among raft nodes before linearized reading'  (duration: 100.47746ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:02:32.611832Z","caller":"traceutil/trace.go:171","msg":"trace[280032782] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"110.784768ms","start":"2023-12-06T18:02:32.501035Z","end":"2023-12-06T18:02:32.61182Z","steps":["trace[280032782] 'process raft request'  (duration: 110.211562ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:02:38.038205Z","caller":"traceutil/trace.go:171","msg":"trace[866156253] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"117.165227ms","start":"2023-12-06T18:02:37.921015Z","end":"2023-12-06T18:02:38.038181Z","steps":["trace[866156253] 'process raft request'  (duration: 59.216527ms)","trace[866156253] 'compare'  (duration: 57.850964ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-06T18:03:00.769242Z","caller":"traceutil/trace.go:171","msg":"trace[1794468321] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1305; }","duration":"108.069429ms","start":"2023-12-06T18:03:00.661158Z","end":"2023-12-06T18:03:00.769228Z","steps":["trace[1794468321] 'process raft request'  (duration: 107.973759ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:03:06.044252Z","caller":"traceutil/trace.go:171","msg":"trace[694038259] transaction","detail":"{read_only:false; response_revision:1347; number_of_response:1; }","duration":"132.722192ms","start":"2023-12-06T18:03:05.911509Z","end":"2023-12-06T18:03:06.044231Z","steps":["trace[694038259] 'process raft request'  (duration: 109.589445ms)","trace[694038259] 'compare'  (duration: 23.022732ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-06T18:03:06.248157Z","caller":"traceutil/trace.go:171","msg":"trace[1502643961] linearizableReadLoop","detail":"{readStateIndex:1396; appliedIndex:1395; }","duration":"200.001294ms","start":"2023-12-06T18:03:06.048139Z","end":"2023-12-06T18:03:06.24814Z","steps":["trace[1502643961] 'read index received'  (duration: 117.577181ms)","trace[1502643961] 'applied index is now lower than readState.Index'  (duration: 82.423462ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-06T18:03:06.248232Z","caller":"traceutil/trace.go:171","msg":"trace[2046071209] transaction","detail":"{read_only:false; response_revision:1348; number_of_response:1; }","duration":"200.794993ms","start":"2023-12-06T18:03:06.047415Z","end":"2023-12-06T18:03:06.24821Z","steps":["trace[2046071209] 'process raft request'  (duration: 118.292464ms)","trace[2046071209] 'compare'  (duration: 82.314299ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-06T18:03:06.248347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.210157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/helm-test\" ","response":"range_response_count:1 size:1325"}
	{"level":"info","ts":"2023-12-06T18:03:06.248385Z","caller":"traceutil/trace.go:171","msg":"trace[1023579917] range","detail":"{range_begin:/registry/pods/kube-system/helm-test; range_end:; response_count:1; response_revision:1348; }","duration":"200.263825ms","start":"2023-12-06T18:03:06.048112Z","end":"2023-12-06T18:03:06.248375Z","steps":["trace[1023579917] 'agreement among raft nodes before linearized reading'  (duration: 200.118267ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:03:11.809876Z","caller":"traceutil/trace.go:171","msg":"trace[1470112352] transaction","detail":"{read_only:false; response_revision:1424; number_of_response:1; }","duration":"106.817748ms","start":"2023-12-06T18:03:11.703034Z","end":"2023-12-06T18:03:11.809852Z","steps":["trace[1470112352] 'process raft request'  (duration: 106.454895ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:03:11.937271Z","caller":"traceutil/trace.go:171","msg":"trace[1510162813] transaction","detail":"{read_only:false; response_revision:1425; number_of_response:1; }","duration":"103.840713ms","start":"2023-12-06T18:03:11.833405Z","end":"2023-12-06T18:03:11.937246Z","steps":["trace[1510162813] 'process raft request'  (duration: 103.60688ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-06T18:03:43.148821Z","caller":"traceutil/trace.go:171","msg":"trace[100908791] transaction","detail":"{read_only:false; response_revision:1526; number_of_response:1; }","duration":"110.995061ms","start":"2023-12-06T18:03:43.037811Z","end":"2023-12-06T18:03:43.148806Z","steps":["trace[100908791] 'process raft request'  (duration: 110.898344ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [10b88ed71a7c60ed16f1ff348610f99ca95950927ab4ee5a0ab1d6b6cc9a27b0] <==
	* 2023/12/06 18:02:31 GCP Auth Webhook started!
	2023/12/06 18:02:50 Ready to marshal response ...
	2023/12/06 18:02:50 Ready to write response ...
	2023/12/06 18:02:54 Ready to marshal response ...
	2023/12/06 18:02:54 Ready to write response ...
	2023/12/06 18:02:59 Ready to marshal response ...
	2023/12/06 18:02:59 Ready to write response ...
	2023/12/06 18:02:59 Ready to marshal response ...
	2023/12/06 18:02:59 Ready to write response ...
	2023/12/06 18:03:05 Ready to marshal response ...
	2023/12/06 18:03:05 Ready to write response ...
	2023/12/06 18:03:07 Ready to marshal response ...
	2023/12/06 18:03:07 Ready to write response ...
	2023/12/06 18:03:08 Ready to marshal response ...
	2023/12/06 18:03:08 Ready to write response ...
	2023/12/06 18:03:08 Ready to marshal response ...
	2023/12/06 18:03:08 Ready to write response ...
	2023/12/06 18:03:08 Ready to marshal response ...
	2023/12/06 18:03:08 Ready to write response ...
	2023/12/06 18:03:36 Ready to marshal response ...
	2023/12/06 18:03:36 Ready to write response ...
	2023/12/06 18:03:52 Ready to marshal response ...
	2023/12/06 18:03:52 Ready to write response ...
	2023/12/06 18:05:10 Ready to marshal response ...
	2023/12/06 18:05:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:05:20 up 47 min,  0 users,  load average: 0.73, 1.32, 0.66
	Linux addons-906021 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [30af1852ecb1172c72df3adb7f82b0b1360c0c5878ee89e5c604b004cd6e43bc] <==
	* I1206 18:03:16.551988       1 main.go:227] handling current node
	I1206 18:03:26.555716       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:03:26.555741       1 main.go:227] handling current node
	I1206 18:03:36.601566       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:03:36.601590       1 main.go:227] handling current node
	I1206 18:03:46.605049       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:03:46.605071       1 main.go:227] handling current node
	I1206 18:03:56.617326       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:03:56.617346       1 main.go:227] handling current node
	I1206 18:04:06.627366       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:04:06.627388       1 main.go:227] handling current node
	I1206 18:04:16.630879       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:04:16.630902       1 main.go:227] handling current node
	I1206 18:04:26.634673       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:04:26.634699       1 main.go:227] handling current node
	I1206 18:04:36.638007       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:04:36.638029       1 main.go:227] handling current node
	I1206 18:04:46.648029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:04:46.648057       1 main.go:227] handling current node
	I1206 18:04:56.652066       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:04:56.652090       1 main.go:227] handling current node
	I1206 18:05:06.663309       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:05:06.663332       1 main.go:227] handling current node
	I1206 18:05:16.666689       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:05:16.666714       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [b7b4fddfb716b389c7b31c3ab7989dc9490e3d0c7b83e6a3e625a12ad589944f] <==
	* W1206 18:02:51.040506       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1206 18:03:06.516716       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 18:03:08.004357       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:56208: read: connection reset by peer
	I1206 18:03:08.359732       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.27.199"}
	I1206 18:03:46.868954       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 18:04:07.734115       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.734170       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:07.740762       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.740908       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:07.749071       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.749112       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:07.752683       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.752740       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:07.763163       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.763474       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:07.765951       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.766011       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:07.813452       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 18:04:07.813725       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 18:04:08.336991       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W1206 18:04:08.749919       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 18:04:08.814465       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1206 18:04:08.912382       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1206 18:05:10.396766       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.30.57"}
	E1206 18:05:12.148651       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
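The final authentication failure at 18:05:12 coincides with the test's addons disable ingress step: the ingress-nginx service account was being deleted while its token was still in flight, so this line is most likely teardown noise rather than a separate fault.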
	
	* 
	* ==> kube-controller-manager [2f8dc468d87001b27275bfd6ff90e076fda715e9025f017fcde806f23923ee5e] <==
	* W1206 18:04:26.372365       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:04:26.372394       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:04:28.212634       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:04:28.212667       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:04:38.720782       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:04:38.720813       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:04:39.855202       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:04:39.855233       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:04:42.936599       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:04:42.936626       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1206 18:05:05.866319       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:05:05.866351       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1206 18:05:10.238927       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1206 18:05:10.247970       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-djpgf"
	I1206 18:05:10.254203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.511123ms"
	I1206 18:05:10.264767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.498794ms"
	I1206 18:05:10.264981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="88.651µs"
	I1206 18:05:10.265024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.454µs"
	I1206 18:05:11.589873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.963203ms"
	I1206 18:05:11.590535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="78.429µs"
	I1206 18:05:12.091393       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1206 18:05:12.091994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="8.872µs"
	I1206 18:05:12.096774       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1206 18:05:14.187959       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1206 18:05:14.187988       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [1f673af8b9978341ca236d8a9b6900b843500ffa66afc13f24a38d23e19d041a] <==
	* I1206 18:01:26.210235       1 server_others.go:69] "Using iptables proxy"
	I1206 18:01:26.510162       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1206 18:01:27.219020       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 18:01:27.306193       1 server_others.go:152] "Using iptables Proxier"
	I1206 18:01:27.306329       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 18:01:27.306363       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 18:01:27.306413       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 18:01:27.306663       1 server.go:846] "Version info" version="v1.28.4"
	I1206 18:01:27.306888       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 18:01:27.307757       1 config.go:188] "Starting service config controller"
	I1206 18:01:27.307826       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 18:01:27.307876       1 config.go:97] "Starting endpoint slice config controller"
	I1206 18:01:27.307902       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 18:01:27.308549       1 config.go:315] "Starting node config controller"
	I1206 18:01:27.308599       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 18:01:27.408409       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 18:01:27.408542       1 shared_informer.go:318] Caches are synced for service config
	I1206 18:01:27.410285       1 shared_informer.go:318] Caches are synced for node config
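The proxier log above notes route_localnet=1 being set so node-ports answer on localhost, which is exactly what the test's curl against 127.0.0.1 depends on. A sketch, not from the run, to confirm the setting from inside the node:

	minikube -p addons-906021 ssh -- sysctl net.ipv4.conf.all.route_localnet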
	
	* 
	* ==> kube-scheduler [249657d2091ff396badfda7a79f963f5c245e51c899ece599ffb846aacf4e33d] <==
	* W1206 18:01:06.701596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 18:01:06.701949       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1206 18:01:06.701644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:01:06.701965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 18:01:06.701796       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 18:01:06.701985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 18:01:07.547320       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:01:07.547357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 18:01:07.561527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 18:01:07.561551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 18:01:07.597861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 18:01:07.597895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 18:01:07.613191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:01:07.613223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 18:01:07.661660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 18:01:07.661686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 18:01:07.685023       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 18:01:07.685059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1206 18:01:07.701341       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 18:01:07.701374       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 18:01:07.743740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 18:01:07.743773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1206 18:01:07.769063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 18:01:07.769101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1206 18:01:10.724016       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
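The "forbidden" errors above are confined to the scheduler's first seconds, before its RBAC bindings were reconciled, and the closing "Caches are synced" line shows they cleared. A sketch to spot-check those permissions once the cluster is up (both should print yes):

	kubectl --context addons-906021 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-906021 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler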
	
	* 
	* ==> kubelet <==
	* Dec 06 18:05:10 addons-906021 kubelet[1566]: I1206 18:05:10.407460    1566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgdk\" (UniqueName: \"kubernetes.io/projected/1a564201-8efc-4862-bd0f-7ee0e7818d16-kube-api-access-2zgdk\") pod \"hello-world-app-5d77478584-djpgf\" (UID: \"1a564201-8efc-4862-bd0f-7ee0e7818d16\") " pod="default/hello-world-app-5d77478584-djpgf"
	Dec 06 18:05:10 addons-906021 kubelet[1566]: I1206 18:05:10.407531    1566 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1a564201-8efc-4862-bd0f-7ee0e7818d16-gcp-creds\") pod \"hello-world-app-5d77478584-djpgf\" (UID: \"1a564201-8efc-4862-bd0f-7ee0e7818d16\") " pod="default/hello-world-app-5d77478584-djpgf"
	Dec 06 18:05:10 addons-906021 kubelet[1566]: W1206 18:05:10.657089    1566 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ad8d52705d348606ec55cd777e2ce4df06cf6143a2b4514154d564a4e2fc0f9d/crio-3615bb033bc7950921296ff01f2ac6693e9dca24e57928f18575e68f9d6cc1d5 WatchSource:0}: Error finding container 3615bb033bc7950921296ff01f2ac6693e9dca24e57928f18575e68f9d6cc1d5: Status 404 returned error can't find the container with id 3615bb033bc7950921296ff01f2ac6693e9dca24e57928f18575e68f9d6cc1d5
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.413842    1566 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xxxm\" (UniqueName: \"kubernetes.io/projected/3ee35abf-c990-4bcd-976c-1df07596953e-kube-api-access-5xxxm\") pod \"3ee35abf-c990-4bcd-976c-1df07596953e\" (UID: \"3ee35abf-c990-4bcd-976c-1df07596953e\") "
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.415822    1566 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee35abf-c990-4bcd-976c-1df07596953e-kube-api-access-5xxxm" (OuterVolumeSpecName: "kube-api-access-5xxxm") pod "3ee35abf-c990-4bcd-976c-1df07596953e" (UID: "3ee35abf-c990-4bcd-976c-1df07596953e"). InnerVolumeSpecName "kube-api-access-5xxxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.514732    1566 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5xxxm\" (UniqueName: \"kubernetes.io/projected/3ee35abf-c990-4bcd-976c-1df07596953e-kube-api-access-5xxxm\") on node \"addons-906021\" DevicePath \"\""
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.559421    1566 scope.go:117] "RemoveContainer" containerID="3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8"
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.576855    1566 scope.go:117] "RemoveContainer" containerID="3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8"
	Dec 06 18:05:11 addons-906021 kubelet[1566]: E1206 18:05:11.577335    1566 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8\": container with ID starting with 3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8 not found: ID does not exist" containerID="3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8"
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.577381    1566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8"} err="failed to get container status \"3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8\": rpc error: code = NotFound desc = could not find container \"3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8\": container with ID starting with 3b6a6f48750e622d2ae22d191d8cf97aa79fde2453984be93388cdc7a470bec8 not found: ID does not exist"
	Dec 06 18:05:11 addons-906021 kubelet[1566]: I1206 18:05:11.583085    1566 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-djpgf" podStartSLOduration=0.857975387 podCreationTimestamp="2023-12-06 18:05:10 +0000 UTC" firstStartedPulling="2023-12-06 18:05:10.702801013 +0000 UTC m=+241.468300710" lastFinishedPulling="2023-12-06 18:05:11.427863086 +0000 UTC m=+242.193362770" observedRunningTime="2023-12-06 18:05:11.582830037 +0000 UTC m=+242.348329739" watchObservedRunningTime="2023-12-06 18:05:11.583037447 +0000 UTC m=+242.348537149"
	Dec 06 18:05:13 addons-906021 kubelet[1566]: I1206 18:05:13.325624    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ee35abf-c990-4bcd-976c-1df07596953e" path="/var/lib/kubelet/pods/3ee35abf-c990-4bcd-976c-1df07596953e/volumes"
	Dec 06 18:05:13 addons-906021 kubelet[1566]: I1206 18:05:13.325948    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7bc82536-4f10-4c3b-a5f3-7f5f198823e4" path="/var/lib/kubelet/pods/7bc82536-4f10-4c3b-a5f3-7f5f198823e4/volumes"
	Dec 06 18:05:13 addons-906021 kubelet[1566]: I1206 18:05:13.326240    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="91090ac3-b622-41b6-8f9b-6a43fd931c73" path="/var/lib/kubelet/pods/91090ac3-b622-41b6-8f9b-6a43fd931c73/volumes"
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.443768    1566 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7vlk\" (UniqueName: \"kubernetes.io/projected/810a9337-af07-402b-be8f-bd24b487561f-kube-api-access-x7vlk\") pod \"810a9337-af07-402b-be8f-bd24b487561f\" (UID: \"810a9337-af07-402b-be8f-bd24b487561f\") "
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.443822    1566 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/810a9337-af07-402b-be8f-bd24b487561f-webhook-cert\") pod \"810a9337-af07-402b-be8f-bd24b487561f\" (UID: \"810a9337-af07-402b-be8f-bd24b487561f\") "
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.445572    1566 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810a9337-af07-402b-be8f-bd24b487561f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "810a9337-af07-402b-be8f-bd24b487561f" (UID: "810a9337-af07-402b-be8f-bd24b487561f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.445707    1566 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810a9337-af07-402b-be8f-bd24b487561f-kube-api-access-x7vlk" (OuterVolumeSpecName: "kube-api-access-x7vlk") pod "810a9337-af07-402b-be8f-bd24b487561f" (UID: "810a9337-af07-402b-be8f-bd24b487561f"). InnerVolumeSpecName "kube-api-access-x7vlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.544589    1566 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x7vlk\" (UniqueName: \"kubernetes.io/projected/810a9337-af07-402b-be8f-bd24b487561f-kube-api-access-x7vlk\") on node \"addons-906021\" DevicePath \"\""
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.544631    1566 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/810a9337-af07-402b-be8f-bd24b487561f-webhook-cert\") on node \"addons-906021\" DevicePath \"\""
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.570579    1566 scope.go:117] "RemoveContainer" containerID="eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56"
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.586609    1566 scope.go:117] "RemoveContainer" containerID="eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56"
	Dec 06 18:05:15 addons-906021 kubelet[1566]: E1206 18:05:15.586972    1566 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56\": container with ID starting with eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56 not found: ID does not exist" containerID="eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56"
	Dec 06 18:05:15 addons-906021 kubelet[1566]: I1206 18:05:15.587026    1566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56"} err="failed to get container status \"eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56\": rpc error: code = NotFound desc = could not find container \"eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56\": container with ID starting with eda999d38be156c80caccd943ce886c5d200554d50a7340bb645afe6d67dad56 not found: ID does not exist"
	Dec 06 18:05:17 addons-906021 kubelet[1566]: I1206 18:05:17.325198    1566 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="810a9337-af07-402b-be8f-bd24b487561f" path="/var/lib/kubelet/pods/810a9337-af07-402b-be8f-bd24b487561f/volumes"
	
	* 
	* ==> storage-provisioner [f8616d11c7e2db6abb7ac0f1cdbf5f460ec5f66d1683ddba4819e129c8aca9de] <==
	* I1206 18:01:57.940908       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 18:01:57.950237       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 18:01:57.950315       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 18:01:58.007602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 18:01:58.007666       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b23efbb7-cf84-4d14-bc88-6c748dab830d", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-906021_1445abfc-8258-45f3-b7a5-28519febc955 became leader
	I1206 18:01:58.007802       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-906021_1445abfc-8258-45f3-b7a5-28519febc955!
	I1206 18:01:58.108875       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-906021_1445abfc-8258-45f3-b7a5-28519febc955!
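Per the LeaderElection event above, the provisioner's lease lives on the k8s.io-minikube-hostpath Endpoints object in kube-system; a sketch to inspect the current holder:

	kubectl --context addons-906021 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml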
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-906021 -n addons-906021
helpers_test.go:261: (dbg) Run:  kubectl --context addons-906021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.13s)
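To iterate on this failure alone, a sketch of a targeted rerun, assuming the usual minikube integration-test layout (the integration build tag, tests under test/integration):

	go test -tags=integration ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m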

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (184.76s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-099068 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-099068 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.573537607s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-099068 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-099068 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [27556109-1490-450c-8d01-9e289ba422da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [27556109-1490-450c-8d01-9e289ba422da] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.008064704s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1206 18:12:44.479575   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:13:12.163379   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-099068 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.128942279s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
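Exit status 28 is curl's timed-out code, and the roughly 130s duration matches a default TCP connect timeout, which suggests nothing ever completed the connection on the node's port 80. A hedged triage sketch (ingress-nginx default names assumed):

	kubectl --context ingress-addon-legacy-099068 -n ingress-nginx get pods,svc,endpoints
	out/minikube-linux-amd64 -p ingress-addon-legacy-099068 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"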
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-099068 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.005486085s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
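The lookup timing out against 192.168.49.2 points at the ingress-dns responder rather than the record itself. A sketch, with the pod label assumed from the addon's defaults, to check the responder before retrying with a shorter timeout:

	# label selector below is an assumption based on the ingress-dns addon defaults
	kubectl --context ingress-addon-legacy-099068 -n kube-system get pods -l app=minikube-ingress-dns
	nslookup -timeout=5 hello-john.test 192.168.49.2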
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons disable ingress --alsologtostderr -v=1
E1206 18:13:52.904415   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:52.909696   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:52.919947   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:52.940245   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:52.980541   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:53.060864   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:53.221256   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:53.541549   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:54.182385   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:13:55.462654   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons disable ingress --alsologtostderr -v=1: (7.412299825s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-099068
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-099068:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a",
	        "Created": "2023-12-06T18:09:56.576942391Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56991,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T18:09:56.854122093Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:75d04379c0692a7c7580bf47e8a90f896e08db4459e8feaaa815f73da348a8e2",
	        "ResolvConfPath": "/var/lib/docker/containers/d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a/hostname",
	        "HostsPath": "/var/lib/docker/containers/d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a/hosts",
	        "LogPath": "/var/lib/docker/containers/d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a/d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a-json.log",
	        "Name": "/ingress-addon-legacy-099068",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-099068:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-099068",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b22f6b224ead951061bdbe9879825722fa43a8c89ef81dea974f721c226beecb-init/diff:/var/lib/docker/overlay2/ec06e12da6157da3a94af2b1665e4c856c3ea27be6944a5fef4fd2886cc68e28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b22f6b224ead951061bdbe9879825722fa43a8c89ef81dea974f721c226beecb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b22f6b224ead951061bdbe9879825722fa43a8c89ef81dea974f721c226beecb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b22f6b224ead951061bdbe9879825722fa43a8c89ef81dea974f721c226beecb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-099068",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-099068/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-099068",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-099068",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-099068",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7d1510d567e446cba12332fde9720f41924889e9338cefe43bc2e88cb2aa128",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a7d1510d567e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-099068": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d344d27313d3",
	                        "ingress-addon-legacy-099068"
	                    ],
	                    "NetworkID": "8715661b1a933dbe957ec0cb9d14dd1659cbf2695e0284ad3de244ebf9fae97b",
	                    "EndpointID": "d75b909995cc0855b73e94784038d1d7e11b16c3ef41196aac6b75550351987a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
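
Note: the inspect dump above shows every published container port (22, 2376, 5000, 8443, 32443/tcp) bound on 127.0.0.1 to an ephemeral host port. The harness reads those mappings back with a Go template passed to docker container inspect -f; the template below appears verbatim in the minikube log later in this section. A minimal, self-contained sketch, with the profile name taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the ephemeral 127.0.0.1 port mapped to the
	// container's 22/tcp, using the same template minikube passes to
	// `docker container inspect -f`.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("ingress-addon-legacy-099068")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 32787 in this report
	}
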
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-099068 -n ingress-addon-legacy-099068
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 logs -n 25
E1206 18:13:58.023668   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-099068 logs -n 25: (1.05557964s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                            Args                            |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-785345 ssh sudo                                 | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC |                     |
	|                | systemctl is-active containerd                             |                             |         |         |                     |                     |
	| cp             | functional-785345 cp                                       | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | testdata/cp-test.txt                                       |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                   |                             |         |         |                     |                     |
	| ssh            | functional-785345 ssh echo                                 | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | hello                                                      |                             |         |         |                     |                     |
	| ssh            | functional-785345 ssh -n                                   | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | functional-785345 sudo cat                                 |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                   |                             |         |         |                     |                     |
	| ssh            | functional-785345 ssh cat                                  | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | /etc/hostname                                              |                             |         |         |                     |                     |
	| cp             | functional-785345 cp                                       | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | functional-785345:/home/docker/cp-test.txt                 |                             |         |         |                     |                     |
	|                | /tmp/TestFunctionalparallelCpCmd2217031248/001/cp-test.txt |                             |         |         |                     |                     |
	| ssh            | functional-785345 ssh -n                                   | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | functional-785345 sudo cat                                 |                             |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                   |                             |         |         |                     |                     |
	| image          | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | image ls --format short                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | image ls --format yaml                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                          |                             |         |         |                     |                     |
	| ssh            | functional-785345 ssh pgrep                                | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC |                     |
	|                | buildkitd                                                  |                             |         |         |                     |                     |
	| image          | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | image ls --format json                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-785345 image build -t                           | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | localhost/my-image:functional-785345                       |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                           |                             |         |         |                     |                     |
	| image          | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | image ls --format table                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                          |                             |         |         |                     |                     |
	| update-context | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | update-context                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                     |                             |         |         |                     |                     |
	| update-context | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | update-context                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                     |                             |         |         |                     |                     |
	| update-context | functional-785345                                          | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	|                | update-context                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                     |                             |         |         |                     |                     |
	| image          | functional-785345 image ls                                 | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	| delete         | -p functional-785345                                       | functional-785345           | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:09 UTC |
	| start          | -p ingress-addon-legacy-099068                             | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:09 UTC | 06 Dec 23 18:10 UTC |
	|                | --kubernetes-version=v1.18.20                              |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                          |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                       |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-099068                                | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:10 UTC | 06 Dec 23 18:10 UTC |
	|                | addons enable ingress                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-099068                                | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:10 UTC | 06 Dec 23 18:10 UTC |
	|                | addons enable ingress-dns                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                     |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-099068                                | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:11 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                              |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                               |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-099068 ip                             | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:13 UTC | 06 Dec 23 18:13 UTC |
	| addons         | ingress-addon-legacy-099068                                | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:13 UTC | 06 Dec 23 18:13 UTC |
	|                | addons disable ingress-dns                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-099068                                | ingress-addon-legacy-099068 | jenkins | v1.32.0 | 06 Dec 23 18:13 UTC | 06 Dec 23 18:13 UTC |
	|                | addons disable ingress                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                     |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:09:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:09:43.154641   56352 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:09:43.154787   56352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:09:43.154810   56352 out.go:309] Setting ErrFile to fd 2...
	I1206 18:09:43.154817   56352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:09:43.155040   56352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:09:43.155725   56352 out.go:303] Setting JSON to false
	I1206 18:09:43.156842   56352 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3132,"bootTime":1701883051,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:09:43.156913   56352 start.go:138] virtualization: kvm guest
	I1206 18:09:43.159811   56352 out.go:177] * [ingress-addon-legacy-099068] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:09:43.161532   56352 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:09:43.163038   56352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:09:43.161564   56352 notify.go:220] Checking for updates...
	I1206 18:09:43.165940   56352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:09:43.167573   56352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:09:43.169241   56352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:09:43.170867   56352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:09:43.172697   56352 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:09:43.193624   56352 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:09:43.193724   56352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:09:43.246103   56352 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-06 18:09:43.237138993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:09:43.246229   56352 docker.go:295] overlay module found
	I1206 18:09:43.248465   56352 out.go:177] * Using the docker driver based on user configuration
	I1206 18:09:43.250077   56352 start.go:298] selected driver: docker
	I1206 18:09:43.250092   56352 start.go:902] validating driver "docker" against <nil>
	I1206 18:09:43.250105   56352 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:09:43.250893   56352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:09:43.303261   56352 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-06 18:09:43.294692202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:09:43.303641   56352 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:09:43.303944   56352 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:09:43.306505   56352 out.go:177] * Using Docker driver with root privileges
	I1206 18:09:43.307978   56352 cni.go:84] Creating CNI manager for ""
	I1206 18:09:43.308003   56352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:09:43.308012   56352 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 18:09:43.308024   56352 start_flags.go:323] config:
	{Name:ingress-addon-legacy-099068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-099068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:09:43.309982   56352 out.go:177] * Starting control plane node ingress-addon-legacy-099068 in cluster ingress-addon-legacy-099068
	I1206 18:09:43.311436   56352 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:09:43.312937   56352 out.go:177] * Pulling base image ...
	I1206 18:09:43.314342   56352 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:09:43.314437   56352 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:09:43.330983   56352 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon, skipping pull
	I1206 18:09:43.331009   56352 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in daemon, skipping load
	I1206 18:09:43.332791   56352 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1206 18:09:43.332828   56352 cache.go:56] Caching tarball of preloaded images
	I1206 18:09:43.333035   56352 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:09:43.335282   56352 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1206 18:09:43.336802   56352 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:09:43.360103   56352 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1206 18:09:48.308183   56352 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:09:48.308297   56352 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:09:49.314979   56352 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
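
Note: the preload download at 18:09:43 pins the tarball to an md5 digest via the ?checksum=md5:... query parameter, and the lines above show that checksum being saved and then verified before the tarball is trusted. A minimal sketch of the verify step, assuming nothing about minikube's internal helpers beyond the md5 comparison itself (verifyMD5 is a hypothetical name):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the downloaded file and compares it against the
	// digest taken from the checksum query parameter.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Digest copied from the download URL logged above.
		if err := verifyMD5("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4",
			"0d02e096853189c5b37812b400898e14"); err != nil {
			panic(err)
		}
		fmt.Println("preload checksum OK")
	}
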
	I1206 18:09:49.315311   56352 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/config.json ...
	I1206 18:09:49.315340   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/config.json: {Name:mk8e76dcf80fdcf52b912866dfa61e9d7501c574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:09:49.315529   56352 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:09:49.315554   56352 start.go:365] acquiring machines lock for ingress-addon-legacy-099068: {Name:mk0be2b4e1b3e60392e6a1a60fd1d696607ad709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:09:49.315602   56352 start.go:369] acquired machines lock for "ingress-addon-legacy-099068" in 36.717µs
	I1206 18:09:49.315621   56352 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-099068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-099068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:09:49.315680   56352 start.go:125] createHost starting for "" (driver="docker")
	I1206 18:09:49.319189   56352 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 18:09:49.319418   56352 start.go:159] libmachine.API.Create for "ingress-addon-legacy-099068" (driver="docker")
	I1206 18:09:49.319447   56352 client.go:168] LocalClient.Create starting
	I1206 18:09:49.319513   56352 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem
	I1206 18:09:49.319548   56352 main.go:141] libmachine: Decoding PEM data...
	I1206 18:09:49.319573   56352 main.go:141] libmachine: Parsing certificate...
	I1206 18:09:49.319627   56352 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem
	I1206 18:09:49.319646   56352 main.go:141] libmachine: Decoding PEM data...
	I1206 18:09:49.319658   56352 main.go:141] libmachine: Parsing certificate...
	I1206 18:09:49.319943   56352 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-099068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 18:09:49.336162   56352 cli_runner.go:211] docker network inspect ingress-addon-legacy-099068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 18:09:49.336243   56352 network_create.go:281] running [docker network inspect ingress-addon-legacy-099068] to gather additional debugging logs...
	I1206 18:09:49.336286   56352 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-099068
	W1206 18:09:49.350893   56352 cli_runner.go:211] docker network inspect ingress-addon-legacy-099068 returned with exit code 1
	I1206 18:09:49.350926   56352 network_create.go:284] error running [docker network inspect ingress-addon-legacy-099068]: docker network inspect ingress-addon-legacy-099068: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-099068 not found
	I1206 18:09:49.350945   56352 network_create.go:286] output of [docker network inspect ingress-addon-legacy-099068]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-099068 not found
	
	** /stderr **
	I1206 18:09:49.351052   56352 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:09:49.366329   56352 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00270f760}
	I1206 18:09:49.366362   56352 network_create.go:124] attempt to create docker network ingress-addon-legacy-099068 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 18:09:49.366400   56352 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-099068 ingress-addon-legacy-099068
	I1206 18:09:49.416780   56352 network_create.go:108] docker network ingress-addon-legacy-099068 192.168.49.0/24 created
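
Note: the sequence above is the network bring-up: inspect the named network (absent, exit code 1), scan for a free private /24 starting at 192.168.49.0/24, then create a bridge network with an explicit subnet and gateway. A sketch replaying the final create command with this run's values; in minikube the subnet comes from the free-subnet scan rather than being hard-coded:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Flags copied from the `docker network create` invocation logged
		// above, including minikube's -o driver options.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=ingress-addon-legacy-099068",
			"ingress-addon-legacy-099068")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("network create failed: %v\n%s", err, out)
		}
	}
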
	I1206 18:09:49.416820   56352 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-099068" container
	I1206 18:09:49.416956   56352 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 18:09:49.431566   56352 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-099068 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-099068 --label created_by.minikube.sigs.k8s.io=true
	I1206 18:09:49.450063   56352 oci.go:103] Successfully created a docker volume ingress-addon-legacy-099068
	I1206 18:09:49.450142   56352 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-099068-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-099068 --entrypoint /usr/bin/test -v ingress-addon-legacy-099068:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib
	I1206 18:09:51.189126   56352 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-099068-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-099068 --entrypoint /usr/bin/test -v ingress-addon-legacy-099068:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib: (1.738919511s)
	I1206 18:09:51.189160   56352 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-099068
	I1206 18:09:51.189177   56352 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:09:51.189198   56352 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 18:09:51.189254   56352 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-099068:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 18:09:56.513949   56352 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-099068:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.324646748s)
	I1206 18:09:56.513984   56352 kic.go:203] duration metric: took 5.324782 seconds to extract preloaded images to volume
	W1206 18:09:56.514117   56352 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 18:09:56.514230   56352 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 18:09:56.562174   56352 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-099068 --name ingress-addon-legacy-099068 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-099068 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-099068 --network ingress-addon-legacy-099068 --ip 192.168.49.2 --volume ingress-addon-legacy-099068:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
	I1206 18:09:56.862543   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Running}}
	I1206 18:09:56.880984   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Status}}
	I1206 18:09:56.898500   56352 cli_runner.go:164] Run: docker exec ingress-addon-legacy-099068 stat /var/lib/dpkg/alternatives/iptables
	I1206 18:09:56.947744   56352 oci.go:144] the created container "ingress-addon-legacy-099068" has a running status.
	I1206 18:09:56.947785   56352 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa...
	I1206 18:09:57.213140   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 18:09:57.213181   56352 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 18:09:57.232516   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Status}}
	I1206 18:09:57.255083   56352 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 18:09:57.255110   56352 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-099068 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 18:09:57.321928   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Status}}
	I1206 18:09:57.340449   56352 machine.go:88] provisioning docker machine ...
	I1206 18:09:57.340490   56352 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-099068"
	I1206 18:09:57.340551   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:57.360211   56352 main.go:141] libmachine: Using SSH client type: native
	I1206 18:09:57.360640   56352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1206 18:09:57.360668   56352 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-099068 && echo "ingress-addon-legacy-099068" | sudo tee /etc/hostname
	I1206 18:09:57.586095   56352 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-099068
	
	I1206 18:09:57.586176   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:57.603836   56352 main.go:141] libmachine: Using SSH client type: native
	I1206 18:09:57.604226   56352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1206 18:09:57.604260   56352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-099068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-099068/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-099068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:09:57.728082   56352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:09:57.728114   56352 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:09:57.728137   56352 ubuntu.go:177] setting up certificates
	I1206 18:09:57.728148   56352 provision.go:83] configureAuth start
	I1206 18:09:57.728196   56352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-099068
	I1206 18:09:57.743843   56352 provision.go:138] copyHostCerts
	I1206 18:09:57.743878   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:09:57.743912   56352 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem, removing ...
	I1206 18:09:57.743925   56352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:09:57.743986   56352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:09:57.744057   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:09:57.744074   56352 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem, removing ...
	I1206 18:09:57.744081   56352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:09:57.744102   56352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:09:57.744149   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:09:57.744164   56352 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem, removing ...
	I1206 18:09:57.744170   56352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:09:57.744189   56352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:09:57.744232   56352 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-099068 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-099068]
	I1206 18:09:57.863009   56352 provision.go:172] copyRemoteCerts
	I1206 18:09:57.863078   56352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:09:57.863113   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:57.879402   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:09:57.968461   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 18:09:57.968534   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1206 18:09:57.989454   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 18:09:57.989518   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 18:09:58.010509   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 18:09:58.010589   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:09:58.031664   56352 provision.go:86] duration metric: configureAuth took 303.504922ms
	I1206 18:09:58.031693   56352 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:09:58.031871   56352 config.go:182] Loaded profile config "ingress-addon-legacy-099068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1206 18:09:58.031981   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:58.048410   56352 main.go:141] libmachine: Using SSH client type: native
	I1206 18:09:58.048885   56352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1206 18:09:58.048907   56352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:09:58.276532   56352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:09:58.276571   56352 machine.go:91] provisioned docker machine in 936.095752ms
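
Note: the literal %!s(MISSING) in the crio provisioning command a few lines up is Go's fmt package signalling a %s verb with no matching argument, so that token reached the remote shell unsubstituted; judging by the echoed tee output just below it, the CRIO_MINIKUBE_OPTIONS line still made it into /etc/sysconfig/crio.minikube. The artifact is easy to reproduce:

	package main

	import "fmt"

	func main() {
		// A %s verb with no argument renders as "%!s(MISSING)" - exactly
		// the artifact in the logged provisioning command. (go vet flags
		// this mistake at build time.)
		fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
	}
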
	I1206 18:09:58.276585   56352 client.go:171] LocalClient.Create took 8.957130426s
	I1206 18:09:58.276613   56352 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-099068" took 8.957195326s
	I1206 18:09:58.276629   56352 start.go:300] post-start starting for "ingress-addon-legacy-099068" (driver="docker")
	I1206 18:09:58.276648   56352 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:09:58.276720   56352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:09:58.276781   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:58.292372   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:09:58.380511   56352 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:09:58.383356   56352 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:09:58.383398   56352 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:09:58.383414   56352 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:09:58.383425   56352 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 18:09:58.383443   56352 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:09:58.383509   56352 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:09:58.383610   56352 filesync.go:149] local asset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> 163462.pem in /etc/ssl/certs
	I1206 18:09:58.383633   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> /etc/ssl/certs/163462.pem
	I1206 18:09:58.383755   56352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:09:58.391119   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:09:58.412062   56352 start.go:303] post-start completed in 135.413263ms
	I1206 18:09:58.412415   56352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-099068
	I1206 18:09:58.427845   56352 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/config.json ...
	I1206 18:09:58.428079   56352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:09:58.428124   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:58.443913   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:09:58.529064   56352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:09:58.533063   56352 start.go:128] duration metric: createHost completed in 9.217368526s
	I1206 18:09:58.533092   56352 start.go:83] releasing machines lock for "ingress-addon-legacy-099068", held for 9.217478102s
	I1206 18:09:58.533160   56352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-099068
	I1206 18:09:58.550610   56352 ssh_runner.go:195] Run: cat /version.json
	I1206 18:09:58.550661   56352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:09:58.550677   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:58.550742   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:09:58.569521   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:09:58.570842   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:09:58.743847   56352 ssh_runner.go:195] Run: systemctl --version
	I1206 18:09:58.747963   56352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:09:58.884903   56352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:09:58.889008   56352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:09:58.907446   56352 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:09:58.907538   56352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:09:58.933613   56352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
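
Editor's note: the bridge and podman CNI configs are sidelined by renaming them to *.mk_disabled, since kindnet is installed as the cluster network later. A sketch of that rename pass, assuming local filesystem access and root (the harness uses find -exec mv over SSH, as logged above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Patterns mirror the find invocation in the log.
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
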
	I1206 18:09:58.933641   56352 start.go:475] detecting cgroup driver to use...
	I1206 18:09:58.933677   56352 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:09:58.933728   56352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:09:58.948023   56352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:09:58.958062   56352 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:09:58.958119   56352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:09:58.970441   56352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:09:58.982647   56352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:09:59.059844   56352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:09:59.140546   56352 docker.go:219] disabling docker service ...
	I1206 18:09:59.140624   56352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:09:59.157390   56352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:09:59.167770   56352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:09:59.240729   56352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:09:59.321006   56352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:09:59.331121   56352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:09:59.345655   56352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1206 18:09:59.345714   56352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:09:59.354393   56352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:09:59.354450   56352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:09:59.363334   56352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:09:59.372006   56352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:09:59.380871   56352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:09:59.389203   56352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:09:59.396696   56352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 18:09:59.404244   56352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:09:59.475428   56352 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 18:09:59.573078   56352 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:09:59.573137   56352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:09:59.576508   56352 start.go:543] Will wait 60s for crictl version
	I1206 18:09:59.576554   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:09:59.579506   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:09:59.612755   56352 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
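
Editor's note: after restarting CRI-O, the harness waits up to 60s for the socket to appear before probing crictl. A minimal polling sketch of that wait; the path and timeout come from the log, the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket exists; crictl can be probed next
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
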
	I1206 18:09:59.612832   56352 ssh_runner.go:195] Run: crio --version
	I1206 18:09:59.645818   56352 ssh_runner.go:195] Run: crio --version
	I1206 18:09:59.684224   56352 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1206 18:09:59.685647   56352 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-099068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:09:59.701495   56352 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 18:09:59.705056   56352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
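
Editor's note: the /etc/hosts edit above is idempotent: any existing line for the name is filtered out, the fresh mapping is appended, and the file is replaced via a temporary copy. A Go sketch of the same logic, assuming it runs as root on the node (the identical pattern is applied later for control-plane.minikube.internal); the temp-file name is hypothetical:

package main

import (
	"os"
	"strings"
)

func main() {
	const name = "host.minikube.internal"
	const entry = "192.168.49.1\t" + name
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any stale line for the name (grep -v $'\t<name>$' in the log)...
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	// ...then append the fresh mapping and swap the file into place.
	out += entry + "\n"
	tmp := "/etc/hosts.minikube-new"
	if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		panic(err)
	}
}
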
	I1206 18:09:59.715109   56352 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1206 18:09:59.715170   56352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:09:59.757984   56352 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1206 18:09:59.758056   56352 ssh_runner.go:195] Run: which lz4
	I1206 18:09:59.761450   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1206 18:09:59.761530   56352 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 18:09:59.764582   56352 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 18:09:59.764610   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1206 18:10:00.663690   56352 crio.go:444] Took 0.902157 seconds to copy over tarball
	I1206 18:10:00.663753   56352 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 18:10:02.901234   56352 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.237392813s)
	I1206 18:10:02.901279   56352 crio.go:451] Took 2.237559 seconds to extract the tarball
	I1206 18:10:02.901288   56352 ssh_runner.go:146] rm: /preloaded.tar.lz4
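
Editor's note: the preload flow above is check, copy, extract, delete: stat /preloaded.tar.lz4 on the node, scp it over when missing, unpack it into /var with lz4, then remove it. A sketch of the extract-and-clean-up half, assuming the tarball has already been transferred and the process runs as root:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// The scp transfer from the local cache is elided here.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "tarball not on the node yet:", err)
		os.Exit(1)
	}
	// equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintln(os.Stderr, string(out))
		os.Exit(1)
	}
	_ = os.Remove(tarball) // reclaim the ~495 MB once extracted
}
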
	I1206 18:10:02.970071   56352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:10:03.000608   56352 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1206 18:10:03.000639   56352 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 18:10:03.000716   56352 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:10:03.000717   56352 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:10:03.000761   56352 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1206 18:10:03.000780   56352 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1206 18:10:03.000809   56352 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:10:03.000756   56352 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1206 18:10:03.000758   56352 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:10:03.000786   56352 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:10:03.001936   56352 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1206 18:10:03.001950   56352 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:10:03.001941   56352 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:10:03.001963   56352 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1206 18:10:03.001963   56352 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1206 18:10:03.002025   56352 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:10:03.002131   56352 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:10:03.002141   56352 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:10:03.170401   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1206 18:10:03.170757   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:10:03.178736   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1206 18:10:03.181191   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:10:03.192595   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:10:03.211487   56352 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1206 18:10:03.211507   56352 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1206 18:10:03.211530   56352 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1206 18:10:03.211530   56352 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:10:03.211571   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.211571   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.214560   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:10:03.230209   56352 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1206 18:10:03.230261   56352 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1206 18:10:03.230304   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.230686   56352 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1206 18:10:03.230726   56352 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:10:03.230775   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.239245   56352 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1206 18:10:03.239285   56352 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:10:03.239323   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.239349   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1206 18:10:03.239365   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1206 18:10:03.248747   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1206 18:10:03.307110   56352 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1206 18:10:03.307201   56352 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:10:03.307252   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.307297   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1206 18:10:03.307258   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1206 18:10:03.307348   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1206 18:10:03.329908   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1206 18:10:03.329959   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1206 18:10:03.418783   56352 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1206 18:10:03.418828   56352 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1206 18:10:03.418828   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1206 18:10:03.418873   56352 ssh_runner.go:195] Run: which crictl
	I1206 18:10:03.421022   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1206 18:10:03.421084   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1206 18:10:03.421102   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1206 18:10:03.422169   56352 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1206 18:10:03.444785   56352 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:10:03.451211   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1206 18:10:03.453570   56352 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1206 18:10:03.622177   56352 cache_images.go:92] LoadImages completed in 621.520066ms
	W1206 18:10:03.622251   56352 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
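
Editor's note: each required image above is verified by comparing the runtime's stored image ID against an expected hash; a mismatch (or absence) marks the image "needs transfer", after which it is removed with crictl rmi and reloaded from the local cache — which fails here because the cached kube-scheduler file is missing. A sketch of the check for one image, reusing the etcd tag and hash from the log (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's copy of image differs from
// the expected image ID, or is missing entirely.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/etcd:3.4.3-0"
	want := "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
	if needsTransfer(img, want) {
		fmt.Println("removing", img, "so the cached copy can be loaded")
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
	}
}
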
	I1206 18:10:03.622307   56352 ssh_runner.go:195] Run: crio config
	I1206 18:10:03.662779   56352 cni.go:84] Creating CNI manager for ""
	I1206 18:10:03.662802   56352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:10:03.662818   56352 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:10:03.662834   56352 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-099068 NodeName:ingress-addon-legacy-099068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1206 18:10:03.662966   56352 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-099068"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
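
Editor's note: the three-document kubeadm config above is rendered from the option struct logged at kubeadm.go:176. A sketch of that rendering approach using text/template; the template below is a trimmed, hypothetical stand-in covering only a few InitConfiguration fields, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// Field values copied from the kubeadm options line in the log.
	data := struct {
		AdvertiseAddress, CRISocket, NodeName string
		APIServerPort                         int
	}{"192.168.49.2", "/var/run/crio/crio.sock", "ingress-addon-legacy-099068", 8443}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
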
	
	I1206 18:10:03.663057   56352 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-099068 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-099068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 18:10:03.663115   56352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1206 18:10:03.671915   56352 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:10:03.671998   56352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 18:10:03.679841   56352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1206 18:10:03.695552   56352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1206 18:10:03.711818   56352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1206 18:10:03.727935   56352 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 18:10:03.731347   56352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:10:03.741654   56352 certs.go:56] Setting up /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068 for IP: 192.168.49.2
	I1206 18:10:03.741738   56352 certs.go:190] acquiring lock for shared ca certs: {Name:mk88da27ec99c860f0c2ad3f4fab21b90cf40c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:03.741895   56352 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key
	I1206 18:10:03.741954   56352 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key
	I1206 18:10:03.742014   56352 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.key
	I1206 18:10:03.742031   56352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt with IP's: []
	I1206 18:10:03.844516   56352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt ...
	I1206 18:10:03.844554   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: {Name:mk6bc984df3a064e273e18ddbde79bed5d42a85e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:03.844731   56352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.key ...
	I1206 18:10:03.844750   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.key: {Name:mk2556dfaba70c18c2258d9c1f52492caebf2f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:03.844825   56352 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key.dd3b5fb2
	I1206 18:10:03.844844   56352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 18:10:04.120636   56352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt.dd3b5fb2 ...
	I1206 18:10:04.120671   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt.dd3b5fb2: {Name:mk652f271426e5f18ae323cb0a84e2eefdee3cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:04.120836   56352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key.dd3b5fb2 ...
	I1206 18:10:04.120849   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key.dd3b5fb2: {Name:mk4feb83de499bda39311404e9458da436fc0dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:04.120918   56352 certs.go:337] copying /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt
	I1206 18:10:04.121010   56352 certs.go:341] copying /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key
	I1206 18:10:04.121067   56352 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.key
	I1206 18:10:04.121081   56352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.crt with IP's: []
	I1206 18:10:04.361466   56352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.crt ...
	I1206 18:10:04.361503   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.crt: {Name:mk1f7757a6a96cb1f0e12a175b9625c3cef51afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:04.361666   56352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.key ...
	I1206 18:10:04.361684   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.key: {Name:mkefbe943609eaa30f867b9c1a19574e48732359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:04.361808   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 18:10:04.361831   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 18:10:04.361841   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 18:10:04.361853   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 18:10:04.361862   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 18:10:04.361875   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 18:10:04.361887   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 18:10:04.361899   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 18:10:04.361951   56352 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem (1338 bytes)
	W1206 18:10:04.361994   56352 certs.go:433] ignoring /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346_empty.pem, impossibly tiny 0 bytes
	I1206 18:10:04.362007   56352 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 18:10:04.362035   56352 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem (1078 bytes)
	I1206 18:10:04.362057   56352 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:10:04.362078   56352 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem (1675 bytes)
	I1206 18:10:04.362117   56352 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:10:04.362143   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:10:04.362158   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem -> /usr/share/ca-certificates/16346.pem
	I1206 18:10:04.362172   56352 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> /usr/share/ca-certificates/163462.pem
	I1206 18:10:04.362738   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 18:10:04.384723   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 18:10:04.405453   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 18:10:04.425863   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 18:10:04.446997   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:10:04.468086   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 18:10:04.489941   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:10:04.511446   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 18:10:04.532177   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:10:04.553749   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem --> /usr/share/ca-certificates/16346.pem (1338 bytes)
	I1206 18:10:04.574994   56352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /usr/share/ca-certificates/163462.pem (1708 bytes)
	I1206 18:10:04.596524   56352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 18:10:04.612587   56352 ssh_runner.go:195] Run: openssl version
	I1206 18:10:04.617574   56352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:10:04.625949   56352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:10:04.629085   56352 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:10:04.629145   56352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:10:04.635187   56352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 18:10:04.643490   56352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16346.pem && ln -fs /usr/share/ca-certificates/16346.pem /etc/ssl/certs/16346.pem"
	I1206 18:10:04.651777   56352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16346.pem
	I1206 18:10:04.654765   56352 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:06 /usr/share/ca-certificates/16346.pem
	I1206 18:10:04.654819   56352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16346.pem
	I1206 18:10:04.660918   56352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16346.pem /etc/ssl/certs/51391683.0"
	I1206 18:10:04.669008   56352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163462.pem && ln -fs /usr/share/ca-certificates/163462.pem /etc/ssl/certs/163462.pem"
	I1206 18:10:04.677099   56352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163462.pem
	I1206 18:10:04.680063   56352 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:06 /usr/share/ca-certificates/163462.pem
	I1206 18:10:04.680131   56352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163462.pem
	I1206 18:10:04.686048   56352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163462.pem /etc/ssl/certs/3ec20f2e.0"
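
Editor's note: the trust-store steps above hash each CA cert the way `openssl x509 -hash -noout` reports and link it into /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA) so OpenSSL-based clients pick it up. A sketch that shells out to openssl for the hash, assuming root; the cert path matches the log:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same subject hash `openssl x509 -hash -noout -in <cert>` prints.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
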
	I1206 18:10:04.694731   56352 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:10:04.697719   56352 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:10:04.697768   56352 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-099068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-099068 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:10:04.697855   56352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 18:10:04.697921   56352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 18:10:04.729513   56352 cri.go:89] found id: ""
	I1206 18:10:04.729597   56352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 18:10:04.737578   56352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 18:10:04.745244   56352 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1206 18:10:04.745294   56352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 18:10:04.752745   56352 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:10:04.752788   56352 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 18:10:04.794064   56352 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1206 18:10:04.794137   56352 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 18:10:04.831750   56352 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1206 18:10:04.831841   56352 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1206 18:10:04.831889   56352 kubeadm.go:322] OS: Linux
	I1206 18:10:04.831963   56352 kubeadm.go:322] CGROUPS_CPU: enabled
	I1206 18:10:04.832046   56352 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1206 18:10:04.832134   56352 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1206 18:10:04.832217   56352 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1206 18:10:04.832322   56352 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1206 18:10:04.832400   56352 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1206 18:10:04.897838   56352 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:10:04.897953   56352 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:10:04.898068   56352 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:10:05.075312   56352 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:10:05.076123   56352 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:10:05.076233   56352 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 18:10:05.147760   56352 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:10:05.151893   56352 out.go:204]   - Generating certificates and keys ...
	I1206 18:10:05.152108   56352 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 18:10:05.152220   56352 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 18:10:05.260209   56352 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:10:05.406242   56352 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:10:05.475626   56352 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 18:10:05.677785   56352 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 18:10:05.881178   56352 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 18:10:05.881322   56352 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-099068 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 18:10:05.954414   56352 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 18:10:05.954553   56352 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-099068 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 18:10:06.117424   56352 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:10:06.210003   56352 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:10:06.291620   56352 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 18:10:06.291714   56352 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:10:06.385007   56352 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:10:06.617353   56352 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:10:06.836578   56352 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:10:07.228489   56352 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:10:07.229250   56352 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:10:07.231630   56352 out.go:204]   - Booting up control plane ...
	I1206 18:10:07.231783   56352 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:10:07.234971   56352 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:10:07.236834   56352 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:10:07.237690   56352 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:10:07.239792   56352 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 18:10:13.741936   56352 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502098 seconds
	I1206 18:10:13.742133   56352 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 18:10:13.752416   56352 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 18:10:14.269876   56352 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 18:10:14.270046   56352 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-099068 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1206 18:10:14.778149   56352 kubeadm.go:322] [bootstrap-token] Using token: qrw0iy.eeir5dfaonczha4c
	I1206 18:10:14.780147   56352 out.go:204]   - Configuring RBAC rules ...
	I1206 18:10:14.780380   56352 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 18:10:14.785986   56352 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 18:10:14.791604   56352 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 18:10:14.793399   56352 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 18:10:14.795285   56352 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 18:10:14.797017   56352 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 18:10:14.803722   56352 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 18:10:15.029890   56352 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 18:10:15.223308   56352 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 18:10:15.224823   56352 kubeadm.go:322] 
	I1206 18:10:15.224932   56352 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 18:10:15.224958   56352 kubeadm.go:322] 
	I1206 18:10:15.225026   56352 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 18:10:15.225039   56352 kubeadm.go:322] 
	I1206 18:10:15.225083   56352 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 18:10:15.225196   56352 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 18:10:15.225283   56352 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 18:10:15.225304   56352 kubeadm.go:322] 
	I1206 18:10:15.225375   56352 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 18:10:15.225484   56352 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 18:10:15.225586   56352 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 18:10:15.225600   56352 kubeadm.go:322] 
	I1206 18:10:15.225716   56352 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 18:10:15.225835   56352 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 18:10:15.225851   56352 kubeadm.go:322] 
	I1206 18:10:15.225958   56352 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qrw0iy.eeir5dfaonczha4c \
	I1206 18:10:15.226081   56352 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 \
	I1206 18:10:15.226110   56352 kubeadm.go:322]     --control-plane 
	I1206 18:10:15.226131   56352 kubeadm.go:322] 
	I1206 18:10:15.226246   56352 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 18:10:15.226262   56352 kubeadm.go:322] 
	I1206 18:10:15.226383   56352 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qrw0iy.eeir5dfaonczha4c \
	I1206 18:10:15.226536   56352 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 
	I1206 18:10:15.227970   56352 kubeadm.go:322] W1206 18:10:04.793535    1381 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1206 18:10:15.228154   56352 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1206 18:10:15.228242   56352 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 18:10:15.228381   56352 kubeadm.go:322] W1206 18:10:07.234606    1381 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1206 18:10:15.228500   56352 kubeadm.go:322] W1206 18:10:07.236583    1381 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1206 18:10:15.228523   56352 cni.go:84] Creating CNI manager for ""
	I1206 18:10:15.228532   56352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:10:15.230722   56352 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1206 18:10:15.233361   56352 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 18:10:15.237111   56352 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1206 18:10:15.237134   56352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 18:10:15.252770   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 18:10:15.696233   56352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 18:10:15.696386   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:15.696389   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3 minikube.k8s.io/name=ingress-addon-legacy-099068 minikube.k8s.io/updated_at=2023_12_06T18_10_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:15.703192   56352 ops.go:34] apiserver oom_adj: -16
	I1206 18:10:15.807330   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:15.874344   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:16.451468   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:16.951710   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:17.450927   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:17.951088   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:18.451040   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:18.951661   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:19.451915   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:19.951535   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:20.451267   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:20.951103   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:21.451149   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:21.950979   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:22.450991   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:22.951265   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:23.451650   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:23.951630   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:24.451506   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:24.951569   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:25.451802   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:25.951421   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:26.451404   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:26.950962   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:27.451783   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:27.951196   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:28.451847   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:28.951764   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:29.451154   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:29.951498   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:30.451722   56352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:10:30.518944   56352 kubeadm.go:1088] duration metric: took 14.822625149s to wait for elevateKubeSystemPrivileges.
	I1206 18:10:30.519002   56352 kubeadm.go:406] StartCluster complete in 25.821217684s
	I1206 18:10:30.519026   56352 settings.go:142] acquiring lock: {Name:mk659e0e4749486c04957a41070055ba699e8e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:30.519088   56352 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:10:30.519810   56352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/kubeconfig: {Name:mk369d6bc31165e4100c77201c4dc2786cd89bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:10:30.520085   56352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 18:10:30.520168   56352 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 18:10:30.520285   56352 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-099068"
	I1206 18:10:30.520310   56352 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-099068"
	I1206 18:10:30.520322   56352 config.go:182] Loaded profile config "ingress-addon-legacy-099068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1206 18:10:30.520334   56352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-099068"
	I1206 18:10:30.520315   56352 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-099068"
	I1206 18:10:30.520494   56352 host.go:66] Checking if "ingress-addon-legacy-099068" exists ...
	I1206 18:10:30.520618   56352 kapi.go:59] client config for ingress-addon-legacy-099068: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:10:30.520751   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Status}}
	I1206 18:10:30.520918   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Status}}
	I1206 18:10:30.521380   56352 cert_rotation.go:137] Starting client certificate rotation controller
	I1206 18:10:30.549825   56352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:10:30.548740   56352 kapi.go:59] client config for ingress-addon-legacy-099068: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:10:30.549407   56352 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-099068" context rescaled to 1 replicas
	I1206 18:10:30.551846   56352 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:10:30.553645   56352 out.go:177] * Verifying Kubernetes components...
	I1206 18:10:30.552077   56352 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:10:30.552347   56352 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-099068"
	I1206 18:10:30.554998   56352 host.go:66] Checking if "ingress-addon-legacy-099068" exists ...
	I1206 18:10:30.555603   56352 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-099068 --format={{.State.Status}}
	I1206 18:10:30.555836   56352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:10:30.555935   56352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 18:10:30.555978   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:10:30.577330   56352 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 18:10:30.577370   56352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 18:10:30.577414   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:10:30.577436   56352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-099068
	I1206 18:10:30.593360   56352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/ingress-addon-legacy-099068/id_rsa Username:docker}
	I1206 18:10:30.646719   56352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 18:10:30.647127   56352 kapi.go:59] client config for ingress-addon-legacy-099068: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:10:30.647363   56352 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-099068" to be "Ready" ...
	I1206 18:10:30.720413   56352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:10:30.721261   56352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 18:10:31.101186   56352 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 18:10:31.250619   56352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 18:10:31.252176   56352 addons.go:502] enable addons completed in 732.004681ms: enabled=[storage-provisioner default-storageclass]
	I1206 18:10:32.710761   56352 node_ready.go:58] node "ingress-addon-legacy-099068" has status "Ready":"False"
	I1206 18:10:35.209544   56352 node_ready.go:58] node "ingress-addon-legacy-099068" has status "Ready":"False"
	I1206 18:10:35.709604   56352 node_ready.go:49] node "ingress-addon-legacy-099068" has status "Ready":"True"
	I1206 18:10:35.709632   56352 node_ready.go:38] duration metric: took 5.062248036s waiting for node "ingress-addon-legacy-099068" to be "Ready" ...
	I1206 18:10:35.709642   56352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:10:35.716516   56352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:37.723744   56352 pod_ready.go:102] pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:10:30 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1206 18:10:39.723915   56352 pod_ready.go:102] pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-06 18:10:30 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1206 18:10:41.726634   56352 pod_ready.go:102] pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace has status "Ready":"False"
	I1206 18:10:43.727144   56352 pod_ready.go:102] pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace has status "Ready":"False"
	I1206 18:10:44.226684   56352 pod_ready.go:92] pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace has status "Ready":"True"
	I1206 18:10:44.226708   56352 pod_ready.go:81] duration metric: took 8.510167892s waiting for pod "coredns-66bff467f8-mmkf6" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.226718   56352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.230758   56352 pod_ready.go:92] pod "etcd-ingress-addon-legacy-099068" in "kube-system" namespace has status "Ready":"True"
	I1206 18:10:44.230781   56352 pod_ready.go:81] duration metric: took 4.056844ms waiting for pod "etcd-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.230796   56352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.234967   56352 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-099068" in "kube-system" namespace has status "Ready":"True"
	I1206 18:10:44.234997   56352 pod_ready.go:81] duration metric: took 4.193301ms waiting for pod "kube-apiserver-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.235009   56352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.239421   56352 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-099068" in "kube-system" namespace has status "Ready":"True"
	I1206 18:10:44.239440   56352 pod_ready.go:81] duration metric: took 4.423704ms waiting for pod "kube-controller-manager-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.239450   56352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9mzd8" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.242921   56352 pod_ready.go:92] pod "kube-proxy-9mzd8" in "kube-system" namespace has status "Ready":"True"
	I1206 18:10:44.242937   56352 pod_ready.go:81] duration metric: took 3.481749ms waiting for pod "kube-proxy-9mzd8" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.242944   56352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.422381   56352 request.go:629] Waited for 179.364263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-099068
	I1206 18:10:44.621784   56352 request.go:629] Waited for 196.381035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-099068
	I1206 18:10:44.624631   56352 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-099068" in "kube-system" namespace has status "Ready":"True"
	I1206 18:10:44.624658   56352 pod_ready.go:81] duration metric: took 381.706924ms waiting for pod "kube-scheduler-ingress-addon-legacy-099068" in "kube-system" namespace to be "Ready" ...
	I1206 18:10:44.624673   56352 pod_ready.go:38] duration metric: took 8.915020717s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:10:44.624738   56352 api_server.go:52] waiting for apiserver process to appear ...
	I1206 18:10:44.624806   56352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 18:10:44.635413   56352 api_server.go:72] duration metric: took 14.083514508s to wait for apiserver process to appear ...
	I1206 18:10:44.635438   56352 api_server.go:88] waiting for apiserver healthz status ...
	I1206 18:10:44.635453   56352 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 18:10:44.639990   56352 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 18:10:44.640882   56352 api_server.go:141] control plane version: v1.18.20
	I1206 18:10:44.640907   56352 api_server.go:131] duration metric: took 5.463487ms to wait for apiserver health ...
	I1206 18:10:44.640916   56352 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 18:10:44.822445   56352 request.go:629] Waited for 181.417869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:10:44.827666   56352 system_pods.go:59] 8 kube-system pods found
	I1206 18:10:44.827697   56352 system_pods.go:61] "coredns-66bff467f8-mmkf6" [40589750-bf81-4547-8b39-6de5336646bd] Running
	I1206 18:10:44.827702   56352 system_pods.go:61] "etcd-ingress-addon-legacy-099068" [9b6c5295-b3bc-4f4a-aed8-e7437afd37a7] Running
	I1206 18:10:44.827706   56352 system_pods.go:61] "kindnet-vwfnw" [845eaf55-1ec3-47a0-a34f-9f68acf95749] Running
	I1206 18:10:44.827710   56352 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-099068" [870d336f-640e-480c-a182-3d78c60be8d9] Running
	I1206 18:10:44.827715   56352 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-099068" [7496ebf8-e5fd-4af7-8f0f-adcb4fabd944] Running
	I1206 18:10:44.827718   56352 system_pods.go:61] "kube-proxy-9mzd8" [e1db0df4-4976-4202-b39e-1ebcfc14087f] Running
	I1206 18:10:44.827722   56352 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-099068" [d20c8027-4c99-44d8-8aa5-95c1e556210a] Running
	I1206 18:10:44.827727   56352 system_pods.go:61] "storage-provisioner" [c7ee42e5-c7bc-4a2e-a637-9fbef575fe9b] Running
	I1206 18:10:44.827733   56352 system_pods.go:74] duration metric: took 186.812298ms to wait for pod list to return data ...
	I1206 18:10:44.827740   56352 default_sa.go:34] waiting for default service account to be created ...
	I1206 18:10:45.022177   56352 request.go:629] Waited for 194.366547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1206 18:10:45.024893   56352 default_sa.go:45] found service account: "default"
	I1206 18:10:45.024923   56352 default_sa.go:55] duration metric: took 197.177929ms for default service account to be created ...
	I1206 18:10:45.024932   56352 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 18:10:45.222416   56352 request.go:629] Waited for 197.405798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:10:45.227775   56352 system_pods.go:86] 8 kube-system pods found
	I1206 18:10:45.227803   56352 system_pods.go:89] "coredns-66bff467f8-mmkf6" [40589750-bf81-4547-8b39-6de5336646bd] Running
	I1206 18:10:45.227808   56352 system_pods.go:89] "etcd-ingress-addon-legacy-099068" [9b6c5295-b3bc-4f4a-aed8-e7437afd37a7] Running
	I1206 18:10:45.227813   56352 system_pods.go:89] "kindnet-vwfnw" [845eaf55-1ec3-47a0-a34f-9f68acf95749] Running
	I1206 18:10:45.227817   56352 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-099068" [870d336f-640e-480c-a182-3d78c60be8d9] Running
	I1206 18:10:45.227824   56352 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-099068" [7496ebf8-e5fd-4af7-8f0f-adcb4fabd944] Running
	I1206 18:10:45.227828   56352 system_pods.go:89] "kube-proxy-9mzd8" [e1db0df4-4976-4202-b39e-1ebcfc14087f] Running
	I1206 18:10:45.227832   56352 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-099068" [d20c8027-4c99-44d8-8aa5-95c1e556210a] Running
	I1206 18:10:45.227836   56352 system_pods.go:89] "storage-provisioner" [c7ee42e5-c7bc-4a2e-a637-9fbef575fe9b] Running
	I1206 18:10:45.227842   56352 system_pods.go:126] duration metric: took 202.904774ms to wait for k8s-apps to be running ...
	I1206 18:10:45.227848   56352 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 18:10:45.227892   56352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:10:45.238533   56352 system_svc.go:56] duration metric: took 10.663852ms WaitForService to wait for kubelet.
	I1206 18:10:45.238585   56352 kubeadm.go:581] duration metric: took 14.686692056s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 18:10:45.238613   56352 node_conditions.go:102] verifying NodePressure condition ...
	I1206 18:10:45.422075   56352 request.go:629] Waited for 183.381704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1206 18:10:45.424920   56352 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 18:10:45.424943   56352 node_conditions.go:123] node cpu capacity is 8
	I1206 18:10:45.424965   56352 node_conditions.go:105] duration metric: took 186.336863ms to run NodePressure ...
	I1206 18:10:45.424982   56352 start.go:228] waiting for startup goroutines ...
	I1206 18:10:45.424995   56352 start.go:233] waiting for cluster config update ...
	I1206 18:10:45.425010   56352 start.go:242] writing updated cluster config ...
	I1206 18:10:45.425293   56352 ssh_runner.go:195] Run: rm -f paused
	I1206 18:10:45.471039   56352 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1206 18:10:45.473336   56352 out.go:177] 
	W1206 18:10:45.475044   56352 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1206 18:10:45.476518   56352 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1206 18:10:45.477973   56352 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-099068" cluster and "default" namespace by default
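	
	Note on the CoreDNS step above: the "host record injected into CoreDNS's ConfigMap" message at 18:10:31.101186 is the result of the sed pipeline run at 18:10:30.646719, which rewrites the Corefile so host.minikube.internal resolves to the host gateway. Reconstructed from those sed expressions (not captured from the running cluster), the injected fragment is the block below, inserted ahead of the existing forward directive, plus a log directive inserted ahead of errors:
	
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }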
	
	* 
	* ==> CRI-O <==
	* Dec 06 18:13:49 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:49.462165056Z" level=info msg="Stopping pod sandbox: aca61bad403d25b088093bdeb4fd331f4df9cebdf91e451d9fa8979d4203b4b3" id=09b13c37-eb43-4b12-8057-000eb8fa52e6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:49 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:49.463076608Z" level=info msg="Stopped pod sandbox: aca61bad403d25b088093bdeb4fd331f4df9cebdf91e451d9fa8979d4203b4b3" id=09b13c37-eb43-4b12-8057-000eb8fa52e6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:49 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:49.936405240Z" level=info msg="Stopping pod sandbox: aca61bad403d25b088093bdeb4fd331f4df9cebdf91e451d9fa8979d4203b4b3" id=b9f59c64-bb11-4ed9-bfa2-a48eda31b6f7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:49 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:49.936461157Z" level=info msg="Stopped pod sandbox (already stopped): aca61bad403d25b088093bdeb4fd331f4df9cebdf91e451d9fa8979d4203b4b3" id=b9f59c64-bb11-4ed9-bfa2-a48eda31b6f7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:50 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:50.683096406Z" level=info msg="Stopping container: 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142 (timeout: 2s)" id=8dabdfa4-1ed7-48a4-8470-d9a384d42c5c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:50 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:50.685715519Z" level=info msg="Stopping container: 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142 (timeout: 2s)" id=d7b13d50-b66c-4752-aaea-f978784ae806 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.692116325Z" level=warning msg="Stopping container 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=8dabdfa4-1ed7-48a4-8470-d9a384d42c5c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:52 ingress-addon-legacy-099068 conmon[3418]: conmon 1c6e5afcd809fddebe49 <ninfo>: container 3430 exited with status 137
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.853992765Z" level=info msg="Stopped container 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142: ingress-nginx/ingress-nginx-controller-7fcf777cb7-92vtw/controller" id=d7b13d50-b66c-4752-aaea-f978784ae806 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.854031208Z" level=info msg="Stopped container 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142: ingress-nginx/ingress-nginx-controller-7fcf777cb7-92vtw/controller" id=8dabdfa4-1ed7-48a4-8470-d9a384d42c5c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.854707721Z" level=info msg="Stopping pod sandbox: 8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" id=d9d52c31-a6ad-4498-bc23-0ba1ab691fd3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.854724715Z" level=info msg="Stopping pod sandbox: 8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" id=3ee79b82-28be-4014-bcff-9352726b89a1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.857483020Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-IL2M3J4QI6SJ2BQ3 - [0:0]\n:KUBE-HP-5UG5L7R77R5ANGFA - [0:0]\n-X KUBE-HP-5UG5L7R77R5ANGFA\n-X KUBE-HP-IL2M3J4QI6SJ2BQ3\nCOMMIT\n"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.858707584Z" level=info msg="Closing host port tcp:80"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.858748010Z" level=info msg="Closing host port tcp:443"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.859683484Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.859703869Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.859847513Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-92vtw Namespace:ingress-nginx ID:8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d UID:85d9c322-396f-4a09-aedd-fef73430e010 NetNS:/var/run/netns/6a18e1ad-2eb0-407f-868e-ec8049bbb761 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.859968335Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-92vtw from CNI network \"kindnet\" (type=ptp)"
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.893996628Z" level=info msg="Stopped pod sandbox: 8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" id=d9d52c31-a6ad-4498-bc23-0ba1ab691fd3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:52 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:52.894148735Z" level=info msg="Stopped pod sandbox (already stopped): 8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" id=3ee79b82-28be-4014-bcff-9352726b89a1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:53 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:53.462997545Z" level=info msg="Stopping container: 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142 (timeout: 2s)" id=3b15a6f8-0308-4a4a-958b-a48553673756 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:53 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:53.465872091Z" level=info msg="Stopped container 1c6e5afcd809fddebe49f74e115a7e2c217ec0b69d14d44c86650944e1a36142: ingress-nginx/ingress-nginx-controller-7fcf777cb7-92vtw/controller" id=3b15a6f8-0308-4a4a-958b-a48553673756 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 06 18:13:53 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:53.466264610Z" level=info msg="Stopping pod sandbox: 8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" id=96b79c42-311b-4354-a613-b06b8f4b9c19 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 06 18:13:53 ingress-addon-legacy-099068 crio[962]: time="2023-12-06 18:13:53.466313618Z" level=info msg="Stopped pod sandbox (already stopped): 8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" id=96b79c42-311b-4354-a613-b06b8f4b9c19 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
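	
	For readability, the escaped payload in the "Restoring iptables rules" entry at 18:13:52.857483020 above unescapes to the following iptables-restore input; it deletes the hostport NAT chains that had published ports 80/443 for the ingress-nginx controller pod being torn down:
	
	    *nat
	    :KUBE-HOSTPORTS - [0:0]
	    :KUBE-HP-IL2M3J4QI6SJ2BQ3 - [0:0]
	    :KUBE-HP-5UG5L7R77R5ANGFA - [0:0]
	    -X KUBE-HP-5UG5L7R77R5ANGFA
	    -X KUBE-HP-IL2M3J4QI6SJ2BQ3
	    COMMIT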
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c263f5397bd63       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            22 seconds ago      Running             hello-world-app           0                   a74dbffc73d40       hello-world-app-5f5d8b66bb-4z7v2
	a34ffa8366cb3       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   099048587edc6       nginx
	1c6e5afcd809f       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   8b0f4549d09c6       ingress-nginx-controller-7fcf777cb7-92vtw
	f03ebdcabf4a6       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   b93d34541c476       ingress-nginx-admission-patch-jpwrq
	5f5dd185b34ef       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   a3970df2d5197       ingress-nginx-admission-create-55n7p
	c360154400598       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   c6ae00f796a84       coredns-66bff467f8-mmkf6
	903541e387b9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   ba25683d43d6d       storage-provisioner
	84612e95d18ec       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   8d83d71947cca       kindnet-vwfnw
	9c257ef729fa7       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   b53dbcbd1823d       kube-proxy-9mzd8
	b9cbdc1dfe784       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   1407916342d02       kube-controller-manager-ingress-addon-legacy-099068
	214173a65b307       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   faf8bffb742d1       kube-apiserver-ingress-addon-legacy-099068
	6b44afa532971       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   bbf13157b7e74       kube-scheduler-ingress-addon-legacy-099068
	fe7ec4fac3884       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   5d3b0f071ac19       etcd-ingress-addon-legacy-099068
	
	* 
	* ==> coredns [c3601544005988a39b8dce0410ae4c4aee232ddf2e2257afde24ef24ccdc76f0] <==
	* [INFO] 10.244.0.5:59798 - 63295 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004966355s
	[INFO] 10.244.0.5:44351 - 35789 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004448473s
	[INFO] 10.244.0.5:56395 - 60358 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004443334s
	[INFO] 10.244.0.5:47632 - 36009 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004530318s
	[INFO] 10.244.0.5:50306 - 2180 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00463334s
	[INFO] 10.244.0.5:35252 - 63188 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004316506s
	[INFO] 10.244.0.5:58329 - 18578 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004311012s
	[INFO] 10.244.0.5:59798 - 14195 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004359977s
	[INFO] 10.244.0.5:44085 - 27551 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004685718s
	[INFO] 10.244.0.5:44351 - 8484 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004283621s
	[INFO] 10.244.0.5:59798 - 44753 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004255972s
	[INFO] 10.244.0.5:44085 - 11230 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004117214s
	[INFO] 10.244.0.5:58329 - 59542 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004356464s
	[INFO] 10.244.0.5:50306 - 4377 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004480286s
	[INFO] 10.244.0.5:56395 - 38085 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004605324s
	[INFO] 10.244.0.5:47632 - 51994 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004576466s
	[INFO] 10.244.0.5:59798 - 43729 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049988s
	[INFO] 10.244.0.5:35252 - 56013 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004519286s
	[INFO] 10.244.0.5:44085 - 13623 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037236s
	[INFO] 10.244.0.5:56395 - 19537 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056237s
	[INFO] 10.244.0.5:47632 - 26785 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040205s
	[INFO] 10.244.0.5:58329 - 46817 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036508s
	[INFO] 10.244.0.5:35252 - 33640 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000028825s
	[INFO] 10.244.0.5:44351 - 5017 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000142009s
	[INFO] 10.244.0.5:50306 - 49230 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049993s
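	
	The long NXDOMAIN runs followed by NOERROR answers above are ordinary resolver search-path expansion, not a fault: with ndots:5, the pod's resolver tries each search suffix (including the host-inherited c.k8s-minikube.internal and google.internal domains visible in the queries) before the bare in-cluster name resolves. A sketch of a pod resolv.conf that would produce this sequence; the search list is inferred from the logged queries, and the nameserver address is the conventional kube-dns ClusterIP, assumed here rather than read from this log:
	
	    # search suffixes inferred from the queries above; 10.96.0.10 is an assumed value
	    search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	    nameserver 10.96.0.10
	    options ndots:5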
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-099068
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-099068
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3
	                    minikube.k8s.io/name=ingress-addon-legacy-099068
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T18_10_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 18:10:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-099068
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 18:13:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 18:13:45 +0000   Wed, 06 Dec 2023 18:10:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 18:13:45 +0000   Wed, 06 Dec 2023 18:10:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 18:13:45 +0000   Wed, 06 Dec 2023 18:10:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 18:13:45 +0000   Wed, 06 Dec 2023 18:10:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-099068
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 4abbc649e4454963967b25c637178460
	  System UUID:                94806fac-7398-4b5e-85c9-0b0b246b4f6e
	  Boot ID:                    5f16510a-fcc2-4dea-8318-41aa6150c4de
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-4z7v2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-mmkf6                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m28s
	  kube-system                 etcd-ingress-addon-legacy-099068                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kindnet-vwfnw                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-099068             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-099068    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-9mzd8                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-ingress-addon-legacy-099068             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m51s (x5 over 3m51s)  kubelet     Node ingress-addon-legacy-099068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x4 over 3m51s)  kubelet     Node ingress-addon-legacy-099068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x4 over 3m51s)  kubelet     Node ingress-addon-legacy-099068 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s                  kubelet     Node ingress-addon-legacy-099068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s                  kubelet     Node ingress-addon-legacy-099068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s                  kubelet     Node ingress-addon-legacy-099068 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m23s                  kubelet     Node ingress-addon-legacy-099068 status is now: NodeReady
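	
	The section above is standard kubectl describe output collected post-mortem; the same view could be reproduced against a live profile with an invocation along these lines (assumed here, using the kubectl context minikube writes for the profile):
	
	    kubectl --context ingress-addon-legacy-099068 describe node ingress-addon-legacy-099068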
	
	* 
	* ==> dmesg <==
	* [  +0.004920] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006670] FS-Cache: N-cookie d=00000000349469f5{9p.inode} n=0000000016fce18f
	[  +0.008751] FS-Cache: N-key=[8] '0690130200000000'
	[  +2.633570] FS-Cache: Duplicate cookie detected
	[  +0.004703] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006809] FS-Cache: O-cookie d=00000000ccddd526{9P.session} n=000000005280aac1
	[  +0.007559] FS-Cache: O-key=[10] '34323935363638353031'
	[  +0.005393] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006594] FS-Cache: N-cookie d=00000000ccddd526{9P.session} n=00000000201f5d84
	[  +0.008902] FS-Cache: N-key=[10] '34323935363638353031'
	[  +5.075634] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 18:11] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +1.019846] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +2.015854] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +4.191730] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +8.191424] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[ +16.126837] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[Dec 6 18:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	
	* 
	* ==> etcd [fe7ec4fac38842a3f5238a48b4ac157c60fbe70248ebbe6bb88c776ddb0b3e1d] <==
	* raft2023/12/06 18:10:08 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/06 18:10:08 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-06 18:10:08.333583 W | auth: simple token is not cryptographically signed
	2023-12-06 18:10:08.337800 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-06 18:10:08.338197 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-06 18:10:08.338856 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-06 18:10:08.340259 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-06 18:10:08.340474 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-06 18:10:08.340539 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/06 18:10:08 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/06 18:10:08 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-06 18:10:08.729955 I | etcdserver: published {Name:ingress-addon-legacy-099068 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-06 18:10:08.729976 I | embed: ready to serve client requests
	2023-12-06 18:10:08.730171 I | embed: ready to serve client requests
	2023-12-06 18:10:08.730245 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-06 18:10:08.731826 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-06 18:10:08.731926 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-06 18:10:08.732948 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-06 18:10:08.733160 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  18:13:58 up 56 min,  0 users,  load average: 0.20, 0.63, 0.61
	Linux ingress-addon-legacy-099068 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [84612e95d18ec98bcc84fe39f0c056e574ee3d8e4ee129ac00a7d9eabcbd1607] <==
	* I1206 18:11:53.458347       1 main.go:227] handling current node
	I1206 18:12:03.461625       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:12:03.461653       1 main.go:227] handling current node
	I1206 18:12:13.472131       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:12:13.472157       1 main.go:227] handling current node
	I1206 18:12:23.475401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:12:23.475426       1 main.go:227] handling current node
	I1206 18:12:33.486700       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:12:33.486724       1 main.go:227] handling current node
	I1206 18:12:43.496054       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:12:43.496080       1 main.go:227] handling current node
	I1206 18:12:53.499989       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:12:53.500013       1 main.go:227] handling current node
	I1206 18:13:03.511179       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:13:03.511202       1 main.go:227] handling current node
	I1206 18:13:13.514395       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:13:13.514425       1 main.go:227] handling current node
	I1206 18:13:23.523361       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:13:23.523389       1 main.go:227] handling current node
	I1206 18:13:33.529498       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:13:33.529521       1 main.go:227] handling current node
	I1206 18:13:43.539707       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:13:43.539737       1 main.go:227] handling current node
	I1206 18:13:53.544251       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1206 18:13:53.544297       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [214173a65b3076ec4431f772988eff1967c7b7e328d266db04e52b3bb953e5d5] <==
	* I1206 18:10:12.416133       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 18:10:12.416257       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1206 18:10:12.416490       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1206 18:10:12.417046       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 18:10:12.417531       1 cache.go:39] Caches are synced for autoregister controller
	I1206 18:10:13.314932       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1206 18:10:13.314962       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1206 18:10:13.319606       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1206 18:10:13.322239       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1206 18:10:13.322258       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1206 18:10:13.671483       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 18:10:13.709682       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1206 18:10:13.831371       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1206 18:10:13.832256       1 controller.go:609] quota admission added evaluator for: endpoints
	I1206 18:10:13.835496       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 18:10:14.667356       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1206 18:10:15.021100       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1206 18:10:15.212712       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1206 18:10:15.442019       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 18:10:30.211388       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1206 18:10:30.387901       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1206 18:10:46.209673       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1206 18:11:13.236818       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1206 18:13:50.441115       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1206 18:13:50.693734       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [b9cbdc1dfe784748b427efa44f511bba9e198822595677bbb42a5bcc8c8e6b7f] <==
	* I1206 18:10:30.393081       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"90b8d433-0617-40bc-8010-5fabeca566fd", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-9mzd8
	I1206 18:10:30.394527       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"de4d97a1-c6c3-470b-b197-b1dc4dec5b67", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-vwfnw
	E1206 18:10:30.408139       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"de4d97a1-c6c3-470b-b197-b1dc4dec5b67", ResourceVersion:"232", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63837483015, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001d31820), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d31840)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001d31860), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001d31880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001d318a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001d318c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001d318e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001d31920)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00020dc70), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00079ca78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00057c150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e3e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00079cac0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1206 18:10:30.483219       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1206 18:10:30.534152       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1206 18:10:30.542510       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"1b0991f4-bd28-48b4-aa69-cca9fc7d3fdb", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1206 18:10:30.600515       1 shared_informer.go:230] Caches are synced for attach detach 
	I1206 18:10:30.604148       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"34cb728a-feab-470e-b5d1-88f44e5c7f5b", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-bgbmx
	I1206 18:10:30.734021       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1206 18:10:30.800470       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1206 18:10:30.800501       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1206 18:10:30.800594       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1206 18:10:30.825706       1 shared_informer.go:230] Caches are synced for resource quota 
	I1206 18:10:30.901281       1 shared_informer.go:230] Caches are synced for resource quota 
	I1206 18:10:40.287034       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1206 18:10:46.167487       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"202c5b8b-b1ad-4663-8a58-0ba9d3533000", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1206 18:10:46.206710       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"bbf81ffe-6441-4698-bc2f-5f4a4af6a995", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-92vtw
	I1206 18:10:46.219467       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e99e2a46-8355-4eba-863c-461ceebd3324", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-55n7p
	I1206 18:10:46.230741       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"95627566-85de-4af6-ad40-bb917040c447", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-jpwrq
	I1206 18:10:48.608981       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"e99e2a46-8355-4eba-863c-461ceebd3324", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1206 18:10:48.622119       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"95627566-85de-4af6-ad40-bb917040c447", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1206 18:13:33.734232       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"334a1454-fda9-423c-86db-fcb92e37562d", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1206 18:13:33.739667       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"528b059d-8e24-4325-8f3a-e427d4f13c7a", APIVersion:"apps/v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-4z7v2
	E1206 18:13:55.455083       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-l8zwx" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [9c257ef729fa769f804a37442e46dadc2dc5ca0042b97df68c332646730fa8dd] <==
	* W1206 18:10:31.380520       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1206 18:10:31.387221       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1206 18:10:31.387249       1 server_others.go:186] Using iptables Proxier.
	I1206 18:10:31.387512       1 server.go:583] Version: v1.18.20
	I1206 18:10:31.387953       1 config.go:315] Starting service config controller
	I1206 18:10:31.387971       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1206 18:10:31.387955       1 config.go:133] Starting endpoints config controller
	I1206 18:10:31.387989       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1206 18:10:31.488140       1 shared_informer.go:230] Caches are synced for service config 
	I1206 18:10:31.488170       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [6b44afa532971c9171e84945cfe90b6fb80cad2769f4a4612bba02369356b0eb] <==
	* I1206 18:10:12.424092       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 18:10:12.424186       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1206 18:10:12.424584       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1206 18:10:12.424730       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1206 18:10:12.501902       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 18:10:12.506761       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:10:12.506861       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 18:10:12.506842       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 18:10:12.507081       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 18:10:12.506995       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 18:10:12.507006       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 18:10:12.507085       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1206 18:10:12.507384       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 18:10:12.507864       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:10:12.508055       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1206 18:10:12.508225       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 18:10:13.401636       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:10:13.401643       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1206 18:10:13.433235       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1206 18:10:13.481172       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 18:10:13.483217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1206 18:10:13.490579       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 18:10:13.524232       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1206 18:10:13.824392       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1206 18:10:30.237075       1 factory.go:503] pod: kube-system/coredns-66bff467f8-bgbmx is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Dec 06 18:13:25 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:25.461936    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:25 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:25.461971    1868 pod_workers.go:191] Error syncing pod 59e0cab2-c247-461c-8f35-2109a910f051 ("kube-ingress-dns-minikube_kube-system(59e0cab2-c247-461c-8f35-2109a910f051)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 06 18:13:33 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:33.743765    1868 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 06 18:13:33 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:33.927506    1868 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-52fd2" (UniqueName: "kubernetes.io/secret/602b57ee-9073-423e-8916-3297e6758866-default-token-52fd2") pod "hello-world-app-5f5d8b66bb-4z7v2" (UID: "602b57ee-9073-423e-8916-3297e6758866")
	Dec 06 18:13:34 ingress-addon-legacy-099068 kubelet[1868]: W1206 18:13:34.141234    1868 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/d344d27313d327346cc96b104d5f14d9ab4915630cdc937a274b69a651a8d34a/crio-a74dbffc73d40ac9a11f20f5b09f0010be2dd305affb5a22bff59f7a873eb602 WatchSource:0}: Error finding container a74dbffc73d40ac9a11f20f5b09f0010be2dd305affb5a22bff59f7a873eb602: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc0008ee0e0 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Dec 06 18:13:36 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:36.461736    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:36 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:36.461783    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:36 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:36.461832    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:36 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:36.461861    1868 pod_workers.go:191] Error syncing pod 59e0cab2-c247-461c-8f35-2109a910f051 ("kube-ingress-dns-minikube_kube-system(59e0cab2-c247-461c-8f35-2109a910f051)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 06 18:13:47 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:47.461978    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:47 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:47.462038    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:47 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:47.462125    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 06 18:13:47 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:47.462177    1868 pod_workers.go:191] Error syncing pod 59e0cab2-c247-461c-8f35-2109a910f051 ("kube-ingress-dns-minikube_kube-system(59e0cab2-c247-461c-8f35-2109a910f051)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 06 18:13:49 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:49.564210    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-4hlmc" (UniqueName: "kubernetes.io/secret/59e0cab2-c247-461c-8f35-2109a910f051-minikube-ingress-dns-token-4hlmc") pod "59e0cab2-c247-461c-8f35-2109a910f051" (UID: "59e0cab2-c247-461c-8f35-2109a910f051")
	Dec 06 18:13:49 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:49.566094    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59e0cab2-c247-461c-8f35-2109a910f051-minikube-ingress-dns-token-4hlmc" (OuterVolumeSpecName: "minikube-ingress-dns-token-4hlmc") pod "59e0cab2-c247-461c-8f35-2109a910f051" (UID: "59e0cab2-c247-461c-8f35-2109a910f051"). InnerVolumeSpecName "minikube-ingress-dns-token-4hlmc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:13:49 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:49.664608    1868 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-4hlmc" (UniqueName: "kubernetes.io/secret/59e0cab2-c247-461c-8f35-2109a910f051-minikube-ingress-dns-token-4hlmc") on node "ingress-addon-legacy-099068" DevicePath ""
	Dec 06 18:13:50 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:50.684913    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-92vtw.179e50b114e5db2e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-92vtw", UID:"85d9c322-396f-4a09-aedd-fef73430e010", APIVersion:"v1", ResourceVersion:"473", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-099068"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1544c57a8b0af2e, ext:215696158406, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1544c57a8b0af2e, ext:215696158406, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-92vtw.179e50b114e5db2e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 06 18:13:50 ingress-addon-legacy-099068 kubelet[1868]: E1206 18:13:50.688201    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-92vtw.179e50b114e5db2e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-92vtw", UID:"85d9c322-396f-4a09-aedd-fef73430e010", APIVersion:"v1", ResourceVersion:"473", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-099068"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1544c57a8b0af2e, ext:215696158406, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1544c57a8d6221e, ext:215698612653, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-92vtw.179e50b114e5db2e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 06 18:13:52 ingress-addon-legacy-099068 kubelet[1868]: W1206 18:13:52.931906    1868 pod_container_deletor.go:77] Container "8b0f4549d09c6e71bc400b2c04a7633fae4cedcc598d4049f4e12c35999fce4d" not found in pod's containers
	Dec 06 18:13:53 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:53.607707    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/85d9c322-396f-4a09-aedd-fef73430e010-webhook-cert") pod "85d9c322-396f-4a09-aedd-fef73430e010" (UID: "85d9c322-396f-4a09-aedd-fef73430e010")
	Dec 06 18:13:53 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:53.607765    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-g2hjk" (UniqueName: "kubernetes.io/secret/85d9c322-396f-4a09-aedd-fef73430e010-ingress-nginx-token-g2hjk") pod "85d9c322-396f-4a09-aedd-fef73430e010" (UID: "85d9c322-396f-4a09-aedd-fef73430e010")
	Dec 06 18:13:53 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:53.609670    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d9c322-396f-4a09-aedd-fef73430e010-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "85d9c322-396f-4a09-aedd-fef73430e010" (UID: "85d9c322-396f-4a09-aedd-fef73430e010"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:13:53 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:53.609810    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d9c322-396f-4a09-aedd-fef73430e010-ingress-nginx-token-g2hjk" (OuterVolumeSpecName: "ingress-nginx-token-g2hjk") pod "85d9c322-396f-4a09-aedd-fef73430e010" (UID: "85d9c322-396f-4a09-aedd-fef73430e010"). InnerVolumeSpecName "ingress-nginx-token-g2hjk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 06 18:13:53 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:53.708065    1868 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/85d9c322-396f-4a09-aedd-fef73430e010-webhook-cert") on node "ingress-addon-legacy-099068" DevicePath ""
	Dec 06 18:13:53 ingress-addon-legacy-099068 kubelet[1868]: I1206 18:13:53.708104    1868 reconciler.go:319] Volume detached for volume "ingress-nginx-token-g2hjk" (UniqueName: "kubernetes.io/secret/85d9c322-396f-4a09-aedd-fef73430e010-ingress-nginx-token-g2hjk") on node "ingress-addon-legacy-099068" DevicePath ""
	
	* 
	* ==> storage-provisioner [903541e387b9cce31bc704342f84cf13f2025c7f845ba94b5b0f34d665c2f8c2] <==
	* I1206 18:10:40.873850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 18:10:40.907603       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 18:10:40.907661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1206 18:10:40.913744       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 18:10:40.913873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ddbdec66-a461-425a-afd7-723db12d9e95", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-099068_6f2bb93e-4ff1-4e23-b69f-68b3d80ba9bc became leader
	I1206 18:10:40.913886       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-099068_6f2bb93e-4ff1-4e23-b69f-68b3d80ba9bc!
	I1206 18:10:41.014472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-099068_6f2bb93e-4ff1-4e23-b69f-68b3d80ba9bc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-099068 -n ingress-addon-legacy-099068
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-099068 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.76s)
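Note on the failure mode: the kubelet log above shows kube-ingress-dns-minikube stuck in ImageInspectError because "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2..." is an unqualified (short-name) image reference, and the node's /etc/containers/registries.conf defines no unqualified-search registries for CRI-O to try. A minimal sketch of the usual fix, assuming the addon manifest can be edited, is to pin the fully qualified reference (digest copied from the log):

    # Hypothetical fragment of the ingress-dns pod manifest. Prefixing the
    # registry host (docker.io/) makes the reference fully qualified, so
    # CRI-O resolves it without consulting unqualified-search-registries.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-ingress-dns-minikube
      namespace: kube-system
    spec:
      containers:
      - name: minikube-ingress-dns
        image: docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab

Equivalently, adding unqualified-search-registries = ["docker.io"] to /etc/containers/registries.conf on the node would let the short name resolve as-is.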

x
+
TestMultiNode/serial/PingHostFrom2Pods (3.18s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-5kkfq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-5kkfq -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-5kkfq -- sh -c "ping -c 1 192.168.58.1": exit status 1 (197.147823ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-5kkfq): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-k9dh8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-k9dh8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-k9dh8 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (184.403142ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-k9dh8): exit status 1
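Note on the failure mode: "ping: permission denied (are you root?)" is busybox failing to open a raw ICMP socket. Unlike Docker, CRI-O's default container capability set does not include NET_RAW, so an unprivileged pod cannot ping even though the target is reachable (192.168.58.1 is the Docker network gateway, per the inspect output below). A minimal sketch of a workaround, assuming the test's busybox pod spec can be modified, is to grant the capability explicitly:

    # Hypothetical pod spec for the busybox test pod. Adding NET_RAW lets
    # busybox's ping open a raw ICMP socket without privileged mode.
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          capabilities:
            add: ["NET_RAW"]

Alternatively, widening the net.ipv4.ping_group_range sysctl on the node allows unprivileged ICMP echo sockets, though that only helps if the busybox ping build supports SOCK_DGRAM ICMP.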
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-193731
helpers_test.go:235: (dbg) docker inspect multinode-193731:

-- stdout --
	[
	    {
	        "Id": "e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa",
	        "Created": "2023-12-06T18:19:05.218218055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103170,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T18:19:05.523769376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:75d04379c0692a7c7580bf47e8a90f896e08db4459e8feaaa815f73da348a8e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/hosts",
	        "LogPath": "/var/lib/docker/containers/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa-json.log",
	        "Name": "/multinode-193731",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-193731:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-193731",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1d63802362d005bf1f4306001c40cc7c835740385d1188d1d8d17b144f4548aa-init/diff:/var/lib/docker/overlay2/ec06e12da6157da3a94af2b1665e4c856c3ea27be6944a5fef4fd2886cc68e28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1d63802362d005bf1f4306001c40cc7c835740385d1188d1d8d17b144f4548aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1d63802362d005bf1f4306001c40cc7c835740385d1188d1d8d17b144f4548aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1d63802362d005bf1f4306001c40cc7c835740385d1188d1d8d17b144f4548aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-193731",
	                "Source": "/var/lib/docker/volumes/multinode-193731/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-193731",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-193731",
	                "name.minikube.sigs.k8s.io": "multinode-193731",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "507d1d24343f1adf509a8c016a5e49f969690454bd1f515dbed3b82addf20def",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/507d1d24343f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-193731": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e4beb39a8487",
	                        "multinode-193731"
	                    ],
	                    "NetworkID": "9a05231ecf41d076829fa17ed429e7977fd8d3f052ac9ca8cce95f3fc255559c",
	                    "EndpointID": "14dcbd8259a87bd5297100440f5b546aa6d6c9994b91f112024770e4cc1dc76d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-193731 -n multinode-193731
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-193731 logs -n 25: (1.237805882s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-323219                           | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-323219 ssh -- ls                    | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-303672                           | mount-start-1-303672 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-323219 ssh -- ls                    | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-323219                           | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	| start   | -p mount-start-2-323219                           | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	| ssh     | mount-start-2-323219 ssh -- ls                    | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-323219                           | mount-start-2-323219 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	| delete  | -p mount-start-1-303672                           | mount-start-1-303672 | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:18 UTC |
	| start   | -p multinode-193731                               | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:18 UTC | 06 Dec 23 18:20 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- apply -f                   | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- rollout                    | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- get pods -o                | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- get pods -o                | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-5kkfq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-k9dh8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-5kkfq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-k9dh8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-5kkfq -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-k9dh8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- get pods -o                | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-5kkfq                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC |                     |
	|         | busybox-5bc68d56bd-5kkfq -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC | 06 Dec 23 18:20 UTC |
	|         | busybox-5bc68d56bd-k9dh8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-193731 -- exec                       | multinode-193731     | jenkins | v1.32.0 | 06 Dec 23 18:20 UTC |                     |
	|         | busybox-5bc68d56bd-k9dh8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
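In the Audit table above, the two "ping -c 1 192.168.58.1" commands are the only entries without an End Time, which matches the PingHostFrom2Pods failure: both busybox pods resolve host.minikube.internal but cannot ping the host-side gateway. A sketch of reproducing one failing check by hand, via minikube's bundled kubectl (pod name is from this run and will differ on other runs):

	# Ping the docker network gateway from inside a busybox pod.
	out/minikube-linux-amd64 kubectl -p multinode-193731 -- \
	  exec busybox-5bc68d56bd-5kkfq -- sh -c "ping -c 1 192.168.58.1"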
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:18:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:18:59.235014  102554 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:18:59.235300  102554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:18:59.235310  102554 out.go:309] Setting ErrFile to fd 2...
	I1206 18:18:59.235315  102554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:18:59.235552  102554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:18:59.236212  102554 out.go:303] Setting JSON to false
	I1206 18:18:59.237443  102554 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3688,"bootTime":1701883051,"procs":536,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:18:59.237506  102554 start.go:138] virtualization: kvm guest
	I1206 18:18:59.240103  102554 out.go:177] * [multinode-193731] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:18:59.241633  102554 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:18:59.241573  102554 notify.go:220] Checking for updates...
	I1206 18:18:59.243353  102554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:18:59.245067  102554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:18:59.246781  102554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:18:59.248220  102554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:18:59.249540  102554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:18:59.251113  102554 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:18:59.272669  102554 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:18:59.272812  102554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:18:59.323509  102554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-06 18:18:59.314740775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:18:59.323650  102554 docker.go:295] overlay module found
	I1206 18:18:59.325721  102554 out.go:177] * Using the docker driver based on user configuration
	I1206 18:18:59.327309  102554 start.go:298] selected driver: docker
	I1206 18:18:59.327326  102554 start.go:902] validating driver "docker" against <nil>
	I1206 18:18:59.327342  102554 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:18:59.328152  102554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:18:59.379968  102554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-06 18:18:59.370803735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:18:59.380151  102554 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:18:59.380724  102554 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:18:59.382820  102554 out.go:177] * Using Docker driver with root privileges
	I1206 18:18:59.384405  102554 cni.go:84] Creating CNI manager for ""
	I1206 18:18:59.384424  102554 cni.go:136] 0 nodes found, recommending kindnet
	I1206 18:18:59.384436  102554 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 18:18:59.384466  102554 start_flags.go:323] config:
	{Name:multinode-193731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:18:59.386231  102554 out.go:177] * Starting control plane node multinode-193731 in cluster multinode-193731
	I1206 18:18:59.387542  102554 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:18:59.388921  102554 out.go:177] * Pulling base image ...
	I1206 18:18:59.390133  102554 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:18:59.390156  102554 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:18:59.390171  102554 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:18:59.390180  102554 cache.go:56] Caching tarball of preloaded images
	I1206 18:18:59.390254  102554 preload.go:174] Found /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 18:18:59.390268  102554 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 18:18:59.390608  102554 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/config.json ...
	I1206 18:18:59.390645  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/config.json: {Name:mke4d32dc942868818435659095ee48273c68696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:18:59.406208  102554 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon, skipping pull
	I1206 18:18:59.406230  102554 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in daemon, skipping load
	I1206 18:18:59.406241  102554 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:18:59.406277  102554 start.go:365] acquiring machines lock for multinode-193731: {Name:mk2ca23fae9253b8614679071c60a9e2865b3af9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:18:59.406384  102554 start.go:369] acquired machines lock for "multinode-193731" in 82.491µs
	I1206 18:18:59.406413  102554 start.go:93] Provisioning new machine with config: &{Name:multinode-193731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:18:59.406513  102554 start.go:125] createHost starting for "" (driver="docker")
	I1206 18:18:59.408842  102554 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1206 18:18:59.409075  102554 start.go:159] libmachine.API.Create for "multinode-193731" (driver="docker")
	I1206 18:18:59.409098  102554 client.go:168] LocalClient.Create starting
	I1206 18:18:59.409169  102554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem
	I1206 18:18:59.409207  102554 main.go:141] libmachine: Decoding PEM data...
	I1206 18:18:59.409231  102554 main.go:141] libmachine: Parsing certificate...
	I1206 18:18:59.409304  102554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem
	I1206 18:18:59.409351  102554 main.go:141] libmachine: Decoding PEM data...
	I1206 18:18:59.409377  102554 main.go:141] libmachine: Parsing certificate...
	I1206 18:18:59.409705  102554 cli_runner.go:164] Run: docker network inspect multinode-193731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 18:18:59.425271  102554 cli_runner.go:211] docker network inspect multinode-193731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 18:18:59.425350  102554 network_create.go:281] running [docker network inspect multinode-193731] to gather additional debugging logs...
	I1206 18:18:59.425376  102554 cli_runner.go:164] Run: docker network inspect multinode-193731
	W1206 18:18:59.440881  102554 cli_runner.go:211] docker network inspect multinode-193731 returned with exit code 1
	I1206 18:18:59.440915  102554 network_create.go:284] error running [docker network inspect multinode-193731]: docker network inspect multinode-193731: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-193731 not found
	I1206 18:18:59.440928  102554 network_create.go:286] output of [docker network inspect multinode-193731]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-193731 not found
	
	** /stderr **
	I1206 18:18:59.441072  102554 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:18:59.457257  102554 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ab48e65b3ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b5:68:ee:c7} reservation:<nil>}
	I1206 18:18:59.457701  102554 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022e72e0}
	I1206 18:18:59.457728  102554 network_create.go:124] attempt to create docker network multinode-193731 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1206 18:18:59.457778  102554 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-193731 multinode-193731
	I1206 18:18:59.510938  102554 network_create.go:108] docker network multinode-193731 192.168.58.0/24 created
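The subnet scan above skips 192.168.49.0/24 because an existing network (the addons cluster) holds it, then settles on the next free private /24. To confirm what was created, the same IPAM template used by the inspect commands in this log can be reused (sketch, network name from this run):

	# Print the subnet and gateway of the freshly created cluster network.
	docker network inspect multinode-193731 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'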
	I1206 18:18:59.510971  102554 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-193731" container
	I1206 18:18:59.511033  102554 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 18:18:59.526131  102554 cli_runner.go:164] Run: docker volume create multinode-193731 --label name.minikube.sigs.k8s.io=multinode-193731 --label created_by.minikube.sigs.k8s.io=true
	I1206 18:18:59.543238  102554 oci.go:103] Successfully created a docker volume multinode-193731
	I1206 18:18:59.543339  102554 cli_runner.go:164] Run: docker run --rm --name multinode-193731-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-193731 --entrypoint /usr/bin/test -v multinode-193731:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib
	I1206 18:19:00.051957  102554 oci.go:107] Successfully prepared a docker volume multinode-193731
	I1206 18:19:00.051996  102554 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:19:00.052017  102554 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 18:19:00.052080  102554 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-193731:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 18:19:05.152869  102554 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-193731:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.100745134s)
	I1206 18:19:05.152908  102554 kic.go:203] duration metric: took 5.100889 seconds to extract preloaded images to volume
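The extract step above unpacks the preloaded image tarball into the named volume that is later mounted at /var inside the node container, so cri-o starts with all Kubernetes images already in its store instead of pulling them. A sketch for locating that volume on the host (volume name from this run):

	# Show where the preloaded image data lives on the host filesystem.
	docker volume inspect multinode-193731 --format '{{.Mountpoint}}'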
	W1206 18:19:05.153040  102554 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 18:19:05.153134  102554 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 18:19:05.203294  102554 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-193731 --name multinode-193731 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-193731 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-193731 --network multinode-193731 --ip 192.168.58.2 --volume multinode-193731:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
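Each --publish=127.0.0.1::PORT in the docker run above asks Docker to pick a free ephemeral host port for the given container port; those picks are exactly the HostPort values reported by docker inspect at the top of this log. A sketch for listing them all at once (container name from this run):

	# List every published port binding for the node container.
	docker port multinode-193731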
	I1206 18:19:05.532290  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Running}}
	I1206 18:19:05.550565  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:19:05.569289  102554 cli_runner.go:164] Run: docker exec multinode-193731 stat /var/lib/dpkg/alternatives/iptables
	I1206 18:19:05.612168  102554 oci.go:144] the created container "multinode-193731" has a running status.
	I1206 18:19:05.612196  102554 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa...
	I1206 18:19:05.702420  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 18:19:05.702466  102554 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 18:19:05.724200  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:19:05.741419  102554 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 18:19:05.741445  102554 kic_runner.go:114] Args: [docker exec --privileged multinode-193731 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 18:19:05.808488  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:19:05.831353  102554 machine.go:88] provisioning docker machine ...
	I1206 18:19:05.831393  102554 ubuntu.go:169] provisioning hostname "multinode-193731"
	I1206 18:19:05.831462  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:05.848836  102554 main.go:141] libmachine: Using SSH client type: native
	I1206 18:19:05.849248  102554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1206 18:19:05.849266  102554 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-193731 && echo "multinode-193731" | sudo tee /etc/hostname
	I1206 18:19:05.849918  102554 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49022->127.0.0.1:32847: read: connection reset by peer
	I1206 18:19:08.982456  102554 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-193731
	
	I1206 18:19:08.982533  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:08.999161  102554 main.go:141] libmachine: Using SSH client type: native
	I1206 18:19:08.999488  102554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1206 18:19:08.999508  102554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-193731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-193731/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-193731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:19:09.120287  102554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
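The SSH script above follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1, leaving the 127.0.0.1 localhost entry untouched: if no /etc/hosts line already matches the hostname, it either rewrites an existing 127.0.1.1 line or appends one. The expected resulting entry inside the node (sketch):

	# /etc/hosts inside the node after provisioning
	127.0.1.1 multinode-193731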
	I1206 18:19:09.120323  102554 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:19:09.120354  102554 ubuntu.go:177] setting up certificates
	I1206 18:19:09.120366  102554 provision.go:83] configureAuth start
	I1206 18:19:09.120430  102554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731
	I1206 18:19:09.136168  102554 provision.go:138] copyHostCerts
	I1206 18:19:09.136211  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:19:09.136249  102554 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem, removing ...
	I1206 18:19:09.136277  102554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:19:09.136354  102554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:19:09.136455  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:19:09.136483  102554 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem, removing ...
	I1206 18:19:09.136493  102554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:19:09.136528  102554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:19:09.136587  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:19:09.136615  102554 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem, removing ...
	I1206 18:19:09.136624  102554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:19:09.136655  102554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:19:09.136723  102554 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.multinode-193731 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-193731]
	I1206 18:19:09.453515  102554 provision.go:172] copyRemoteCerts
	I1206 18:19:09.453589  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:19:09.453642  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:09.470626  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:09.560780  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 18:19:09.560842  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:19:09.582339  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 18:19:09.582409  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 18:19:09.604233  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 18:19:09.604314  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 18:19:09.625783  102554 provision.go:86] duration metric: configureAuth took 505.403071ms
	I1206 18:19:09.625825  102554 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:19:09.625993  102554 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:19:09.626086  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:09.642852  102554 main.go:141] libmachine: Using SSH client type: native
	I1206 18:19:09.643186  102554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1206 18:19:09.643214  102554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:19:09.852006  102554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:19:09.852038  102554 machine.go:91] provisioned docker machine in 4.020656449s
	I1206 18:19:09.852049  102554 client.go:171] LocalClient.Create took 10.442944788s
	I1206 18:19:09.852068  102554 start.go:167] duration metric: libmachine.API.Create for "multinode-193731" took 10.44299425s
	I1206 18:19:09.852074  102554 start.go:300] post-start starting for "multinode-193731" (driver="docker")
	I1206 18:19:09.852087  102554 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:19:09.852137  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:19:09.852170  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:09.868749  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:09.956911  102554 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:19:09.959864  102554 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1206 18:19:09.959895  102554 command_runner.go:130] > NAME="Ubuntu"
	I1206 18:19:09.959905  102554 command_runner.go:130] > VERSION_ID="22.04"
	I1206 18:19:09.959915  102554 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1206 18:19:09.959920  102554 command_runner.go:130] > VERSION_CODENAME=jammy
	I1206 18:19:09.959924  102554 command_runner.go:130] > ID=ubuntu
	I1206 18:19:09.959932  102554 command_runner.go:130] > ID_LIKE=debian
	I1206 18:19:09.959936  102554 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1206 18:19:09.959944  102554 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1206 18:19:09.959950  102554 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1206 18:19:09.959960  102554 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1206 18:19:09.959966  102554 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1206 18:19:09.960028  102554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:19:09.960063  102554 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:19:09.960074  102554 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:19:09.960082  102554 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 18:19:09.960092  102554 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:19:09.960147  102554 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:19:09.960228  102554 filesync.go:149] local asset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> 163462.pem in /etc/ssl/certs
	I1206 18:19:09.960240  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> /etc/ssl/certs/163462.pem
	I1206 18:19:09.960362  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:19:09.968079  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:19:09.992114  102554 start.go:303] post-start completed in 140.000379ms
	I1206 18:19:09.992493  102554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731
	I1206 18:19:10.009017  102554 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/config.json ...
	I1206 18:19:10.009257  102554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:19:10.009299  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:10.025238  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:10.108737  102554 command_runner.go:130] > 24%!
	(MISSING)I1206 18:19:10.108943  102554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:19:10.112923  102554 command_runner.go:130] > 224G
	I1206 18:19:10.113064  102554 start.go:128] duration metric: createHost completed in 10.706537331s
	I1206 18:19:10.113086  102554 start.go:83] releasing machines lock for "multinode-193731", held for 10.706689148s
	I1206 18:19:10.113149  102554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731
	I1206 18:19:10.129829  102554 ssh_runner.go:195] Run: cat /version.json
	I1206 18:19:10.129897  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:10.129908  102554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:19:10.129970  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:10.149485  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:10.149543  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:10.235784  102554 command_runner.go:130] > {"iso_version": "v1.32.1-1701387192-17703", "kicbase_version": "v0.0.42-1701685682-17711", "minikube_version": "v1.32.0", "commit": "142948a70f353687cb2ac9a770cd20790e3c3e80"}
	I1206 18:19:10.235928  102554 ssh_runner.go:195] Run: systemctl --version
	I1206 18:19:10.321989  102554 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 18:19:10.324241  102554 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1206 18:19:10.324300  102554 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 18:19:10.324367  102554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:19:10.461139  102554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:19:10.465357  102554 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1206 18:19:10.465385  102554 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1206 18:19:10.465396  102554 command_runner.go:130] > Device: 37h/55d	Inode: 539827      Links: 1
	I1206 18:19:10.465406  102554 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 18:19:10.465419  102554 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1206 18:19:10.465428  102554 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1206 18:19:10.465440  102554 command_runner.go:130] > Change: 2023-12-06 18:00:33.726480179 +0000
	I1206 18:19:10.465451  102554 command_runner.go:130] >  Birth: 2023-12-06 18:00:33.726480179 +0000
	I1206 18:19:10.465501  102554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:19:10.482849  102554 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:19:10.482931  102554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:19:10.509574  102554 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1206 18:19:10.509617  102554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
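Since kindnet was selected as the CNI earlier in this log, minikube parks the base image's default loopback, podman and crio bridge configs by renaming them to *.mk_disabled, so cri-o will not load a competing network config before kindnet writes its own. A sketch for checking what remains active (container name from this run):

	# Only *.mk_disabled copies of the stock configs should remain here.
	docker exec multinode-193731 ls /etc/cni/net.d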
	I1206 18:19:10.509626  102554 start.go:475] detecting cgroup driver to use...
	I1206 18:19:10.509661  102554 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:19:10.509714  102554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:19:10.523058  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:19:10.532910  102554 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:19:10.532968  102554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:19:10.545027  102554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:19:10.557169  102554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:19:10.637999  102554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:19:10.650706  102554 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1206 18:19:10.718418  102554 docker.go:219] disabling docker service ...
	I1206 18:19:10.718495  102554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:19:10.735523  102554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:19:10.745802  102554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:19:10.826677  102554 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1206 18:19:10.826749  102554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:19:10.906787  102554 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1206 18:19:10.906890  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:19:10.917001  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:19:10.930388  102554 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 18:19:10.931097  102554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 18:19:10.931154  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:19:10.939598  102554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:19:10.939660  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:19:10.948009  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:19:10.956278  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:19:10.964910  102554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:19:10.973135  102554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:19:10.980672  102554 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 18:19:10.980739  102554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
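The edits above pin the pause image to registry.k8s.io/pause:3.9, force cri-o's cgroup manager to cgroupfs to match the driver detected on the host, and move conmon into the pod cgroup; the sysctl checks then confirm bridged pod traffic can be iptables-filtered and forwarded. A condensed sketch of the key drop-in edit (path copied from the log):

	# Align cri-o's cgroup manager with the host's cgroupfs driver, then restart.
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl restart crio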
	I1206 18:19:10.988141  102554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:19:11.065811  102554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 18:19:11.179165  102554 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:19:11.179225  102554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:19:11.182711  102554 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 18:19:11.182736  102554 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 18:19:11.182746  102554 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1206 18:19:11.182755  102554 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 18:19:11.182768  102554 command_runner.go:130] > Access: 2023-12-06 18:19:11.167575704 +0000
	I1206 18:19:11.182777  102554 command_runner.go:130] > Modify: 2023-12-06 18:19:11.167575704 +0000
	I1206 18:19:11.182787  102554 command_runner.go:130] > Change: 2023-12-06 18:19:11.167575704 +0000
	I1206 18:19:11.182800  102554 command_runner.go:130] >  Birth: -
	I1206 18:19:11.182820  102554 start.go:543] Will wait 60s for crictl version
	I1206 18:19:11.182854  102554 ssh_runner.go:195] Run: which crictl
	I1206 18:19:11.186156  102554 command_runner.go:130] > /usr/bin/crictl
	I1206 18:19:11.186223  102554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:19:11.216680  102554 command_runner.go:130] > Version:  0.1.0
	I1206 18:19:11.216700  102554 command_runner.go:130] > RuntimeName:  cri-o
	I1206 18:19:11.216705  102554 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1206 18:19:11.216710  102554 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 18:19:11.218555  102554 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1206 18:19:11.218633  102554 ssh_runner.go:195] Run: crio --version
	I1206 18:19:11.250735  102554 command_runner.go:130] > crio version 1.24.6
	I1206 18:19:11.250757  102554 command_runner.go:130] > Version:          1.24.6
	I1206 18:19:11.250764  102554 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1206 18:19:11.250769  102554 command_runner.go:130] > GitTreeState:     clean
	I1206 18:19:11.250775  102554 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1206 18:19:11.250780  102554 command_runner.go:130] > GoVersion:        go1.18.2
	I1206 18:19:11.250785  102554 command_runner.go:130] > Compiler:         gc
	I1206 18:19:11.250792  102554 command_runner.go:130] > Platform:         linux/amd64
	I1206 18:19:11.250801  102554 command_runner.go:130] > Linkmode:         dynamic
	I1206 18:19:11.250814  102554 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 18:19:11.250834  102554 command_runner.go:130] > SeccompEnabled:   true
	I1206 18:19:11.250841  102554 command_runner.go:130] > AppArmorEnabled:  false
	I1206 18:19:11.251979  102554 ssh_runner.go:195] Run: crio --version
	I1206 18:19:11.284198  102554 command_runner.go:130] > crio version 1.24.6
	I1206 18:19:11.284222  102554 command_runner.go:130] > Version:          1.24.6
	I1206 18:19:11.284229  102554 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1206 18:19:11.284234  102554 command_runner.go:130] > GitTreeState:     clean
	I1206 18:19:11.284241  102554 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1206 18:19:11.284248  102554 command_runner.go:130] > GoVersion:        go1.18.2
	I1206 18:19:11.284255  102554 command_runner.go:130] > Compiler:         gc
	I1206 18:19:11.284263  102554 command_runner.go:130] > Platform:         linux/amd64
	I1206 18:19:11.284285  102554 command_runner.go:130] > Linkmode:         dynamic
	I1206 18:19:11.284298  102554 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 18:19:11.284309  102554 command_runner.go:130] > SeccompEnabled:   true
	I1206 18:19:11.284317  102554 command_runner.go:130] > AppArmorEnabled:  false
	I1206 18:19:11.286368  102554 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1206 18:19:11.287617  102554 cli_runner.go:164] Run: docker network inspect multinode-193731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:19:11.303193  102554 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1206 18:19:11.306789  102554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
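
The one-liner above is an idempotent upsert of the host.minikube.internal mapping: filter out any existing entry, append the current gateway address, and copy the result back with sudo. Spelled out, with the same paths as in the log:

# Drop any stale host.minikube.internal line, then append the fresh mapping.
{ grep -v $'\thost.minikube.internal$' /etc/hosts
  echo $'192.168.58.1\thost.minikube.internal'
} > /tmp/h.$$
sudo cp /tmp/h.$$ /etc/hosts
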
	I1206 18:19:11.316689  102554 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:19:11.316737  102554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:19:11.367901  102554 command_runner.go:130] > {
	I1206 18:19:11.367926  102554 command_runner.go:130] >   "images": [
	I1206 18:19:11.367932  102554 command_runner.go:130] >     {
	I1206 18:19:11.367939  102554 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1206 18:19:11.367944  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.367957  102554 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1206 18:19:11.367961  102554 command_runner.go:130] >       ],
	I1206 18:19:11.367965  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.367973  102554 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1206 18:19:11.367980  102554 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1206 18:19:11.367984  102554 command_runner.go:130] >       ],
	I1206 18:19:11.367992  102554 command_runner.go:130] >       "size": "65258016",
	I1206 18:19:11.368004  102554 command_runner.go:130] >       "uid": null,
	I1206 18:19:11.368010  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368022  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.368032  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.368040  102554 command_runner.go:130] >     },
	I1206 18:19:11.368047  102554 command_runner.go:130] >     {
	I1206 18:19:11.368053  102554 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1206 18:19:11.368059  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.368065  102554 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 18:19:11.368071  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368075  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.368086  102554 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1206 18:19:11.368098  102554 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1206 18:19:11.368107  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368120  102554 command_runner.go:130] >       "size": "31470524",
	I1206 18:19:11.368130  102554 command_runner.go:130] >       "uid": null,
	I1206 18:19:11.368138  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368142  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.368149  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.368153  102554 command_runner.go:130] >     },
	I1206 18:19:11.368159  102554 command_runner.go:130] >     {
	I1206 18:19:11.368165  102554 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1206 18:19:11.368171  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.368177  102554 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1206 18:19:11.368183  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368190  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.368206  102554 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1206 18:19:11.368227  102554 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1206 18:19:11.368237  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368244  102554 command_runner.go:130] >       "size": "53621675",
	I1206 18:19:11.368251  102554 command_runner.go:130] >       "uid": null,
	I1206 18:19:11.368256  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368262  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.368281  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.368288  102554 command_runner.go:130] >     },
	I1206 18:19:11.368294  102554 command_runner.go:130] >     {
	I1206 18:19:11.368305  102554 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1206 18:19:11.368315  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.368327  102554 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1206 18:19:11.368336  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368348  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.368357  102554 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1206 18:19:11.368372  102554 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1206 18:19:11.368407  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368420  102554 command_runner.go:130] >       "size": "295456551",
	I1206 18:19:11.368427  102554 command_runner.go:130] >       "uid": {
	I1206 18:19:11.368436  102554 command_runner.go:130] >         "value": "0"
	I1206 18:19:11.368456  102554 command_runner.go:130] >       },
	I1206 18:19:11.368468  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368478  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.368489  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.368498  102554 command_runner.go:130] >     },
	I1206 18:19:11.368507  102554 command_runner.go:130] >     {
	I1206 18:19:11.368533  102554 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1206 18:19:11.368543  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.368556  102554 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1206 18:19:11.368566  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368576  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.368592  102554 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1206 18:19:11.368610  102554 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1206 18:19:11.368617  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368624  102554 command_runner.go:130] >       "size": "127226832",
	I1206 18:19:11.368634  102554 command_runner.go:130] >       "uid": {
	I1206 18:19:11.368645  102554 command_runner.go:130] >         "value": "0"
	I1206 18:19:11.368655  102554 command_runner.go:130] >       },
	I1206 18:19:11.368665  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368676  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.368686  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.368694  102554 command_runner.go:130] >     },
	I1206 18:19:11.368701  102554 command_runner.go:130] >     {
	I1206 18:19:11.368708  102554 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1206 18:19:11.368718  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.368731  102554 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1206 18:19:11.368741  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368748  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.368763  102554 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1206 18:19:11.368779  102554 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1206 18:19:11.368786  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368790  102554 command_runner.go:130] >       "size": "123261750",
	I1206 18:19:11.368800  102554 command_runner.go:130] >       "uid": {
	I1206 18:19:11.368810  102554 command_runner.go:130] >         "value": "0"
	I1206 18:19:11.368820  102554 command_runner.go:130] >       },
	I1206 18:19:11.368830  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368840  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.368850  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.368859  102554 command_runner.go:130] >     },
	I1206 18:19:11.368867  102554 command_runner.go:130] >     {
	I1206 18:19:11.368873  102554 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1206 18:19:11.368883  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.368896  102554 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1206 18:19:11.368905  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368919  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.368934  102554 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1206 18:19:11.368949  102554 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1206 18:19:11.368957  102554 command_runner.go:130] >       ],
	I1206 18:19:11.368961  102554 command_runner.go:130] >       "size": "74749335",
	I1206 18:19:11.368971  102554 command_runner.go:130] >       "uid": null,
	I1206 18:19:11.368981  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.368992  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.369002  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.369008  102554 command_runner.go:130] >     },
	I1206 18:19:11.369020  102554 command_runner.go:130] >     {
	I1206 18:19:11.369034  102554 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1206 18:19:11.369042  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.369058  102554 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1206 18:19:11.369067  102554 command_runner.go:130] >       ],
	I1206 18:19:11.369078  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.369112  102554 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1206 18:19:11.369127  102554 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1206 18:19:11.369133  102554 command_runner.go:130] >       ],
	I1206 18:19:11.369139  102554 command_runner.go:130] >       "size": "61551410",
	I1206 18:19:11.369148  102554 command_runner.go:130] >       "uid": {
	I1206 18:19:11.369159  102554 command_runner.go:130] >         "value": "0"
	I1206 18:19:11.369168  102554 command_runner.go:130] >       },
	I1206 18:19:11.369179  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.369188  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.369198  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.369206  102554 command_runner.go:130] >     },
	I1206 18:19:11.369214  102554 command_runner.go:130] >     {
	I1206 18:19:11.369227  102554 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1206 18:19:11.369238  102554 command_runner.go:130] >       "repoTags": [
	I1206 18:19:11.369250  102554 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1206 18:19:11.369259  102554 command_runner.go:130] >       ],
	I1206 18:19:11.369269  102554 command_runner.go:130] >       "repoDigests": [
	I1206 18:19:11.369284  102554 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1206 18:19:11.369298  102554 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1206 18:19:11.369305  102554 command_runner.go:130] >       ],
	I1206 18:19:11.369309  102554 command_runner.go:130] >       "size": "750414",
	I1206 18:19:11.369319  102554 command_runner.go:130] >       "uid": {
	I1206 18:19:11.369330  102554 command_runner.go:130] >         "value": "65535"
	I1206 18:19:11.369339  102554 command_runner.go:130] >       },
	I1206 18:19:11.369349  102554 command_runner.go:130] >       "username": "",
	I1206 18:19:11.369359  102554 command_runner.go:130] >       "spec": null,
	I1206 18:19:11.369369  102554 command_runner.go:130] >       "pinned": false
	I1206 18:19:11.369378  102554 command_runner.go:130] >     }
	I1206 18:19:11.369386  102554 command_runner.go:130] >   ]
	I1206 18:19:11.369393  102554 command_runner.go:130] > }
	I1206 18:19:11.369593  102554 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 18:19:11.369609  102554 crio.go:415] Images already preloaded, skipping extraction
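
The preload check lists the runtime's image store and verifies that every image kubeadm needs for v1.28.4 is already present, which is what lets the tarball extraction be skipped. A hedged sketch of that check using jq (the required-image list is taken from the output above; jq is an assumption, not what minikube itself uses):

# Verify every required image is already in CRI-O's store.
have=$(sudo crictl images --output json | jq -r '.images[].repoTags[]')
for img in registry.k8s.io/kube-apiserver:v1.28.4 \
           registry.k8s.io/kube-controller-manager:v1.28.4 \
           registry.k8s.io/kube-scheduler:v1.28.4 \
           registry.k8s.io/kube-proxy:v1.28.4 \
           registry.k8s.io/etcd:3.5.9-0 \
           registry.k8s.io/coredns/coredns:v1.10.1 \
           registry.k8s.io/pause:3.9; do
    grep -qx "$img" <<<"$have" || { echo "missing: $img"; exit 1; }
done
echo "all images are preloaded"
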
	I1206 18:19:11.369665  102554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:19:11.400395  102554 crio.go:496] all images are preloaded for cri-o runtime.
	I1206 18:19:11.400414  102554 cache_images.go:84] Images are preloaded, skipping loading
	I1206 18:19:11.400468  102554 ssh_runner.go:195] Run: crio config
	I1206 18:19:11.437973  102554 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 18:19:11.438003  102554 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 18:19:11.438015  102554 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 18:19:11.438021  102554 command_runner.go:130] > #
	I1206 18:19:11.438039  102554 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 18:19:11.438050  102554 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 18:19:11.438061  102554 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 18:19:11.438077  102554 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 18:19:11.438089  102554 command_runner.go:130] > # reload'.
	I1206 18:19:11.438098  102554 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 18:19:11.438107  102554 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 18:19:11.438120  102554 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 18:19:11.438133  102554 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 18:19:11.438142  102554 command_runner.go:130] > [crio]
	I1206 18:19:11.438152  102554 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 18:19:11.438163  102554 command_runner.go:130] > # container images, in this directory.
	I1206 18:19:11.438180  102554 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 18:19:11.438195  102554 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 18:19:11.438207  102554 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1206 18:19:11.438220  102554 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 18:19:11.438233  102554 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 18:19:11.438242  102554 command_runner.go:130] > # storage_driver = "vfs"
	I1206 18:19:11.438256  102554 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 18:19:11.438267  102554 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 18:19:11.438277  102554 command_runner.go:130] > # storage_option = [
	I1206 18:19:11.438286  102554 command_runner.go:130] > # ]
	I1206 18:19:11.438301  102554 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 18:19:11.438314  102554 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 18:19:11.438325  102554 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 18:19:11.438335  102554 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 18:19:11.438345  102554 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 18:19:11.438360  102554 command_runner.go:130] > # always happen on a node reboot
	I1206 18:19:11.438372  102554 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 18:19:11.438385  102554 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 18:19:11.438398  102554 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 18:19:11.438417  102554 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 18:19:11.438434  102554 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 18:19:11.438445  102554 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 18:19:11.438463  102554 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 18:19:11.438473  102554 command_runner.go:130] > # internal_wipe = true
	I1206 18:19:11.438485  102554 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 18:19:11.438498  102554 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 18:19:11.438510  102554 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 18:19:11.438526  102554 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 18:19:11.438539  102554 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 18:19:11.438549  102554 command_runner.go:130] > [crio.api]
	I1206 18:19:11.438558  102554 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 18:19:11.438569  102554 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 18:19:11.438581  102554 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 18:19:11.438588  102554 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 18:19:11.438602  102554 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 18:19:11.438614  102554 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 18:19:11.438636  102554 command_runner.go:130] > # stream_port = "0"
	I1206 18:19:11.438648  102554 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 18:19:11.438659  102554 command_runner.go:130] > # stream_enable_tls = false
	I1206 18:19:11.438671  102554 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 18:19:11.438681  102554 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 18:19:11.438691  102554 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 18:19:11.438704  102554 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 18:19:11.438713  102554 command_runner.go:130] > # minutes.
	I1206 18:19:11.438721  102554 command_runner.go:130] > # stream_tls_cert = ""
	I1206 18:19:11.438738  102554 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 18:19:11.438750  102554 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 18:19:11.438758  102554 command_runner.go:130] > # stream_tls_key = ""
	I1206 18:19:11.438770  102554 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 18:19:11.438781  102554 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 18:19:11.438790  102554 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 18:19:11.438798  102554 command_runner.go:130] > # stream_tls_ca = ""
	I1206 18:19:11.438810  102554 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 18:19:11.438821  102554 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 18:19:11.438834  102554 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 18:19:11.438845  102554 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 18:19:11.438966  102554 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 18:19:11.438988  102554 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 18:19:11.438994  102554 command_runner.go:130] > [crio.runtime]
	I1206 18:19:11.439003  102554 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 18:19:11.439019  102554 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 18:19:11.439025  102554 command_runner.go:130] > # "nofile=1024:2048"
	I1206 18:19:11.439034  102554 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 18:19:11.439045  102554 command_runner.go:130] > # default_ulimits = [
	I1206 18:19:11.439056  102554 command_runner.go:130] > # ]
	I1206 18:19:11.439067  102554 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 18:19:11.439074  102554 command_runner.go:130] > # no_pivot = false
	I1206 18:19:11.439084  102554 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 18:19:11.439097  102554 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 18:19:11.439106  102554 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 18:19:11.439119  102554 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 18:19:11.439130  102554 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 18:19:11.439144  102554 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 18:19:11.439155  102554 command_runner.go:130] > # conmon = ""
	I1206 18:19:11.439166  102554 command_runner.go:130] > # Cgroup setting for conmon
	I1206 18:19:11.439180  102554 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 18:19:11.439191  102554 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 18:19:11.439203  102554 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 18:19:11.439214  102554 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 18:19:11.439228  102554 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 18:19:11.439238  102554 command_runner.go:130] > # conmon_env = [
	I1206 18:19:11.439251  102554 command_runner.go:130] > # ]
	I1206 18:19:11.439263  102554 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 18:19:11.439274  102554 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 18:19:11.439287  102554 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 18:19:11.439299  102554 command_runner.go:130] > # default_env = [
	I1206 18:19:11.439306  102554 command_runner.go:130] > # ]
	I1206 18:19:11.439315  102554 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 18:19:11.439325  102554 command_runner.go:130] > # selinux = false
	I1206 18:19:11.439339  102554 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 18:19:11.439353  102554 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 18:19:11.439365  102554 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 18:19:11.439376  102554 command_runner.go:130] > # seccomp_profile = ""
	I1206 18:19:11.439387  102554 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 18:19:11.439396  102554 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 18:19:11.439409  102554 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 18:19:11.439421  102554 command_runner.go:130] > # which might increase security.
	I1206 18:19:11.439431  102554 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1206 18:19:11.439445  102554 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 18:19:11.439465  102554 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 18:19:11.439475  102554 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 18:19:11.439487  102554 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 18:19:11.439499  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:19:11.439510  102554 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 18:19:11.439524  102554 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 18:19:11.439534  102554 command_runner.go:130] > # the cgroup blockio controller.
	I1206 18:19:11.439545  102554 command_runner.go:130] > # blockio_config_file = ""
	I1206 18:19:11.439555  102554 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 18:19:11.439563  102554 command_runner.go:130] > # irqbalance daemon.
	I1206 18:19:11.439570  102554 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 18:19:11.439584  102554 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 18:19:11.439596  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:19:11.439606  102554 command_runner.go:130] > # rdt_config_file = ""
	I1206 18:19:11.439622  102554 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 18:19:11.439632  102554 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 18:19:11.439642  102554 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 18:19:11.439650  102554 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 18:19:11.439662  102554 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 18:19:11.439676  102554 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 18:19:11.439686  102554 command_runner.go:130] > # will be added.
	I1206 18:19:11.439696  102554 command_runner.go:130] > # default_capabilities = [
	I1206 18:19:11.439705  102554 command_runner.go:130] > # 	"CHOWN",
	I1206 18:19:11.439715  102554 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 18:19:11.439724  102554 command_runner.go:130] > # 	"FSETID",
	I1206 18:19:11.439732  102554 command_runner.go:130] > # 	"FOWNER",
	I1206 18:19:11.439736  102554 command_runner.go:130] > # 	"SETGID",
	I1206 18:19:11.439745  102554 command_runner.go:130] > # 	"SETUID",
	I1206 18:19:11.439755  102554 command_runner.go:130] > # 	"SETPCAP",
	I1206 18:19:11.439766  102554 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 18:19:11.439775  102554 command_runner.go:130] > # 	"KILL",
	I1206 18:19:11.439784  102554 command_runner.go:130] > # ]
	I1206 18:19:11.439800  102554 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 18:19:11.439813  102554 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 18:19:11.439821  102554 command_runner.go:130] > # add_inheritable_capabilities = true
	I1206 18:19:11.439829  102554 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 18:19:11.439846  102554 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 18:19:11.439856  102554 command_runner.go:130] > # default_sysctls = [
	I1206 18:19:11.439865  102554 command_runner.go:130] > # ]
	I1206 18:19:11.439876  102554 command_runner.go:130] > # List of devices on the host that a
	I1206 18:19:11.439890  102554 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 18:19:11.439900  102554 command_runner.go:130] > # allowed_devices = [
	I1206 18:19:11.439907  102554 command_runner.go:130] > # 	"/dev/fuse",
	I1206 18:19:11.439910  102554 command_runner.go:130] > # ]
	I1206 18:19:11.439921  102554 command_runner.go:130] > # List of additional devices, specified as
	I1206 18:19:11.440024  102554 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 18:19:11.440043  102554 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 18:19:11.440054  102554 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 18:19:11.440064  102554 command_runner.go:130] > # additional_devices = [
	I1206 18:19:11.440072  102554 command_runner.go:130] > # ]
	I1206 18:19:11.440081  102554 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 18:19:11.440089  102554 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 18:19:11.440099  102554 command_runner.go:130] > # 	"/etc/cdi",
	I1206 18:19:11.440109  102554 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 18:19:11.440121  102554 command_runner.go:130] > # ]
	I1206 18:19:11.440134  102554 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 18:19:11.440148  102554 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 18:19:11.440158  102554 command_runner.go:130] > # Defaults to false.
	I1206 18:19:11.440167  102554 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 18:19:11.440177  102554 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 18:19:11.440191  102554 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 18:19:11.440201  102554 command_runner.go:130] > # hooks_dir = [
	I1206 18:19:11.440212  102554 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 18:19:11.440221  102554 command_runner.go:130] > # ]
	I1206 18:19:11.440234  102554 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 18:19:11.440247  102554 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 18:19:11.440255  102554 command_runner.go:130] > # its default mounts from the following two files:
	I1206 18:19:11.440263  102554 command_runner.go:130] > #
	I1206 18:19:11.440288  102554 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 18:19:11.440299  102554 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 18:19:11.440312  102554 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 18:19:11.440320  102554 command_runner.go:130] > #
	I1206 18:19:11.440335  102554 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 18:19:11.440346  102554 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 18:19:11.440358  102554 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 18:19:11.440370  102554 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 18:19:11.440379  102554 command_runner.go:130] > #
	I1206 18:19:11.440391  102554 command_runner.go:130] > # default_mounts_file = ""
	I1206 18:19:11.440404  102554 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 18:19:11.440418  102554 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 18:19:11.440428  102554 command_runner.go:130] > # pids_limit = 0
	I1206 18:19:11.440437  102554 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 18:19:11.440449  102554 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 18:19:11.440463  102554 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 18:19:11.440479  102554 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 18:19:11.440489  102554 command_runner.go:130] > # log_size_max = -1
	I1206 18:19:11.440503  102554 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 18:19:11.440513  102554 command_runner.go:130] > # log_to_journald = false
	I1206 18:19:11.440523  102554 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 18:19:11.440533  102554 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 18:19:11.440548  102554 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 18:19:11.440560  102554 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 18:19:11.440570  102554 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 18:19:11.440580  102554 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 18:19:11.440592  102554 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 18:19:11.440602  102554 command_runner.go:130] > # read_only = false
	I1206 18:19:11.440611  102554 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 18:19:11.440628  102554 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 18:19:11.440640  102554 command_runner.go:130] > # live configuration reload.
	I1206 18:19:11.440650  102554 command_runner.go:130] > # log_level = "info"
	I1206 18:19:11.440662  102554 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 18:19:11.440673  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:19:11.440683  102554 command_runner.go:130] > # log_filter = ""
	I1206 18:19:11.440693  102554 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 18:19:11.440703  102554 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 18:19:11.440713  102554 command_runner.go:130] > # separated by comma.
	I1206 18:19:11.440724  102554 command_runner.go:130] > # uid_mappings = ""
	I1206 18:19:11.440737  102554 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 18:19:11.440755  102554 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 18:19:11.440766  102554 command_runner.go:130] > # separated by comma.
	I1206 18:19:11.440772  102554 command_runner.go:130] > # gid_mappings = ""
	I1206 18:19:11.440781  102554 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 18:19:11.440794  102554 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 18:19:11.440813  102554 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 18:19:11.440824  102554 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 18:19:11.440837  102554 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 18:19:11.440850  102554 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 18:19:11.440862  102554 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 18:19:11.440868  102554 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 18:19:11.440878  102554 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 18:19:11.440891  102554 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 18:19:11.440904  102554 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 18:19:11.440914  102554 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 18:19:11.440927  102554 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 18:19:11.440968  102554 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 18:19:11.440981  102554 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 18:19:11.440996  102554 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 18:19:11.441008  102554 command_runner.go:130] > # drop_infra_ctr = true
	I1206 18:19:11.441019  102554 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 18:19:11.441028  102554 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 18:19:11.441040  102554 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 18:19:11.441045  102554 command_runner.go:130] > # infra_ctr_cpuset = ""
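To illustrate the pairing the comment describes, a sketch assuming CPUs 0-1 are set aside for infra work (hypothetical values):

    sudo tee /etc/crio/crio.conf.d/10-cpuset.conf <<'EOF'
    [crio.runtime]
    infra_ctr_cpuset = "0-1"
    EOF
    # and start the kubelet with the matching flag: --reserved-cpus=0-1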
	I1206 18:19:11.441054  102554 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 18:19:11.441067  102554 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 18:19:11.441075  102554 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 18:19:11.441089  102554 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 18:19:11.441096  102554 command_runner.go:130] > # pinns_path = ""
	I1206 18:19:11.441109  102554 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 18:19:11.441120  102554 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 18:19:11.441131  102554 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 18:19:11.441138  102554 command_runner.go:130] > # default_runtime = "runc"
	I1206 18:19:11.441151  102554 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 18:19:11.441171  102554 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1206 18:19:11.441190  102554 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 18:19:11.441203  102554 command_runner.go:130] > # creation as a file is not desired either.
	I1206 18:19:11.441215  102554 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 18:19:11.441226  102554 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 18:19:11.441235  102554 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 18:19:11.441244  102554 command_runner.go:130] > # ]
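Filled in with the /etc/hostname example from the comment above, the option would look like this (a sketch, not part of this run's config):

    sudo tee /etc/crio/crio.conf.d/10-absent-mounts.conf <<'EOF'
    [crio.runtime]
    # fail container creation rather than creating /etc/hostname as a directory
    absent_mount_sources_to_reject = ["/etc/hostname"]
    EOF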
	I1206 18:19:11.441255  102554 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 18:19:11.441268  102554 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 18:19:11.441280  102554 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 18:19:11.441290  102554 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 18:19:11.441297  102554 command_runner.go:130] > #
	I1206 18:19:11.441302  102554 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 18:19:11.441310  102554 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 18:19:11.441320  102554 command_runner.go:130] > #  runtime_type = "oci"
	I1206 18:19:11.441329  102554 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 18:19:11.441341  102554 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 18:19:11.441348  102554 command_runner.go:130] > #  allowed_annotations = []
	I1206 18:19:11.441357  102554 command_runner.go:130] > # Where:
	I1206 18:19:11.441366  102554 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 18:19:11.441383  102554 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 18:19:11.441393  102554 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 18:19:11.441403  102554 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 18:19:11.441414  102554 command_runner.go:130] > #   in $PATH.
	I1206 18:19:11.441425  102554 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 18:19:11.441436  102554 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 18:19:11.441449  102554 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 18:19:11.441459  102554 command_runner.go:130] > #   state.
	I1206 18:19:11.441468  102554 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 18:19:11.441474  102554 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 18:19:11.441483  102554 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 18:19:11.441493  102554 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 18:19:11.441507  102554 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 18:19:11.441521  102554 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 18:19:11.441532  102554 command_runner.go:130] > #   The currently recognized values are:
	I1206 18:19:11.441543  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 18:19:11.441555  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 18:19:11.441565  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 18:19:11.441579  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 18:19:11.441595  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 18:19:11.441609  102554 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 18:19:11.441624  102554 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 18:19:11.441637  102554 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 18:19:11.441644  102554 command_runner.go:130] > #   should be moved to the container's cgroup
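Following the template above, a handler entry for crun might look like this (the binary path and root are assumptions, not taken from this run); pods would then select it through a RuntimeClass whose handler is "crun":

    sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"   # assumed install location
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF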
	I1206 18:19:11.441649  102554 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 18:19:11.441658  102554 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1206 18:19:11.441669  102554 command_runner.go:130] > runtime_type = "oci"
	I1206 18:19:11.441677  102554 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 18:19:11.441687  102554 command_runner.go:130] > runtime_config_path = ""
	I1206 18:19:11.441694  102554 command_runner.go:130] > monitor_path = ""
	I1206 18:19:11.441703  102554 command_runner.go:130] > monitor_cgroup = ""
	I1206 18:19:11.441711  102554 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 18:19:11.441821  102554 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 18:19:11.441838  102554 command_runner.go:130] > # running containers
	I1206 18:19:11.441846  102554 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 18:19:11.441857  102554 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 18:19:11.441871  102554 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 18:19:11.441884  102554 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1206 18:19:11.441893  102554 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 18:19:11.441904  102554 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 18:19:11.441909  102554 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 18:19:11.441916  102554 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 18:19:11.441930  102554 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 18:19:11.441943  102554 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1206 18:19:11.441954  102554 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 18:19:11.441967  102554 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 18:19:11.441980  102554 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 18:19:11.441993  102554 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix and a set of resources it supports mutating.
	I1206 18:19:11.442003  102554 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1206 18:19:11.442016  102554 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 18:19:11.442032  102554 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 18:19:11.442047  102554 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 18:19:11.442059  102554 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 18:19:11.442075  102554 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 18:19:11.442088  102554 command_runner.go:130] > # Example:
	I1206 18:19:11.442101  102554 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 18:19:11.442113  102554 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 18:19:11.442124  102554 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 18:19:11.442133  102554 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 18:19:11.442143  102554 command_runner.go:130] > # cpuset = "0-1"
	I1206 18:19:11.442150  102554 command_runner.go:130] > # cpushares = 0
	I1206 18:19:11.442160  102554 command_runner.go:130] > # Where:
	I1206 18:19:11.442168  102554 command_runner.go:130] > # The workload name is workload-type.
	I1206 18:19:11.442184  102554 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 18:19:11.442196  102554 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 18:19:11.442208  102554 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 18:19:11.442223  102554 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 18:19:11.442234  102554 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 18:19:11.442240  102554 command_runner.go:130] > # 
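Putting the workload example together, a pod opting into workload-type and overriding cpushares for one container, using the annotation format shown above (pod and container names are hypothetical):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                               # activation annotation, key only
        io.crio.workload-type/ctr: '{"cpushares": "512"}'  # per-container override
    spec:
      containers:
      - name: ctr
        image: registry.k8s.io/pause:3.9
    EOF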
	I1206 18:19:11.442246  102554 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 18:19:11.442252  102554 command_runner.go:130] > #
	I1206 18:19:11.442258  102554 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 18:19:11.442268  102554 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 18:19:11.442282  102554 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 18:19:11.442297  102554 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 18:19:11.442310  102554 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
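As the comment notes, registry defaults come from containers-registries.conf(5); a minimal sketch of such a file (the mirror host below is hypothetical):

    sudo tee /etc/containers/registries.conf <<'EOF'
    unqualified-search-registries = ["docker.io"]

    [[registry]]
    prefix = "docker.io"
    location = "mirror.example.internal:5000"   # hypothetical mirror
    insecure = true
    EOF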
	I1206 18:19:11.442319  102554 command_runner.go:130] > [crio.image]
	I1206 18:19:11.442329  102554 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 18:19:11.442342  102554 command_runner.go:130] > # default_transport = "docker://"
	I1206 18:19:11.442351  102554 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 18:19:11.442359  102554 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 18:19:11.442364  102554 command_runner.go:130] > # global_auth_file = ""
	I1206 18:19:11.442371  102554 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 18:19:11.442376  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:19:11.442384  102554 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 18:19:11.442390  102554 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 18:19:11.442398  102554 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 18:19:11.442403  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:19:11.442409  102554 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 18:19:11.442419  102554 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 18:19:11.442437  102554 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 18:19:11.442451  102554 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 18:19:11.442464  102554 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 18:19:11.442474  102554 command_runner.go:130] > # pause_command = "/pause"
	I1206 18:19:11.442487  102554 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 18:19:11.442496  102554 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 18:19:11.442503  102554 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 18:19:11.442511  102554 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 18:19:11.442518  102554 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 18:19:11.442523  102554 command_runner.go:130] > # signature_policy = ""
	I1206 18:19:11.442536  102554 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 18:19:11.442545  102554 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 18:19:11.442549  102554 command_runner.go:130] > # changing them here.
	I1206 18:19:11.442555  102554 command_runner.go:130] > # insecure_registries = [
	I1206 18:19:11.442559  102554 command_runner.go:130] > # ]
	I1206 18:19:11.442567  102554 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 18:19:11.442573  102554 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1206 18:19:11.442579  102554 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 18:19:11.442587  102554 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 18:19:11.442593  102554 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 18:19:11.442600  102554 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 18:19:11.442606  102554 command_runner.go:130] > # CNI plugins.
	I1206 18:19:11.442610  102554 command_runner.go:130] > [crio.network]
	I1206 18:19:11.442622  102554 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 18:19:11.442631  102554 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 18:19:11.442635  102554 command_runner.go:130] > # cni_default_network = ""
	I1206 18:19:11.442647  102554 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 18:19:11.442659  102554 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 18:19:11.442669  102554 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 18:19:11.442675  102554 command_runner.go:130] > # plugin_dirs = [
	I1206 18:19:11.442679  102554 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 18:19:11.442685  102554 command_runner.go:130] > # ]
	I1206 18:19:11.442691  102554 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1206 18:19:11.442697  102554 command_runner.go:130] > [crio.metrics]
	I1206 18:19:11.442702  102554 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 18:19:11.442709  102554 command_runner.go:130] > # enable_metrics = false
	I1206 18:19:11.442717  102554 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 18:19:11.442724  102554 command_runner.go:130] > # By default, all metrics are enabled.
	I1206 18:19:11.442730  102554 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 18:19:11.442738  102554 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 18:19:11.442745  102554 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 18:19:11.442751  102554 command_runner.go:130] > # metrics_collectors = [
	I1206 18:19:11.442755  102554 command_runner.go:130] > # 	"operations",
	I1206 18:19:11.442762  102554 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 18:19:11.442766  102554 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 18:19:11.442772  102554 command_runner.go:130] > # 	"operations_errors",
	I1206 18:19:11.442777  102554 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 18:19:11.442783  102554 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 18:19:11.442788  102554 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 18:19:11.442794  102554 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 18:19:11.442798  102554 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 18:19:11.442803  102554 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 18:19:11.442809  102554 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 18:19:11.442813  102554 command_runner.go:130] > # 	"containers_oom_total",
	I1206 18:19:11.442821  102554 command_runner.go:130] > # 	"containers_oom",
	I1206 18:19:11.442827  102554 command_runner.go:130] > # 	"processes_defunct",
	I1206 18:19:11.442831  102554 command_runner.go:130] > # 	"operations_total",
	I1206 18:19:11.442838  102554 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 18:19:11.442843  102554 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 18:19:11.442849  102554 command_runner.go:130] > # 	"operations_errors_total",
	I1206 18:19:11.442853  102554 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 18:19:11.442860  102554 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 18:19:11.442865  102554 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 18:19:11.442872  102554 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 18:19:11.442876  102554 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 18:19:11.442883  102554 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 18:19:11.442886  102554 command_runner.go:130] > # ]
	I1206 18:19:11.442891  102554 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 18:19:11.442897  102554 command_runner.go:130] > # metrics_port = 9090
	I1206 18:19:11.442902  102554 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 18:19:11.442909  102554 command_runner.go:130] > # metrics_socket = ""
	I1206 18:19:11.442914  102554 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 18:19:11.442925  102554 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 18:19:11.442933  102554 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 18:19:11.442940  102554 command_runner.go:130] > # certificate on any modification event.
	I1206 18:19:11.442944  102554 command_runner.go:130] > # metrics_cert = ""
	I1206 18:19:11.442951  102554 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 18:19:11.442956  102554 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 18:19:11.442962  102554 command_runner.go:130] > # metrics_key = ""
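If enable_metrics were turned on, the endpoint on metrics_port could be spot-checked like this (a sketch; this run leaves metrics at the default of disabled):

    curl -s http://127.0.0.1:9090/metrics | grep '^crio_'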
	I1206 18:19:11.442968  102554 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 18:19:11.442974  102554 command_runner.go:130] > [crio.tracing]
	I1206 18:19:11.442983  102554 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 18:19:11.442989  102554 command_runner.go:130] > # enable_tracing = false
	I1206 18:19:11.442994  102554 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 18:19:11.443001  102554 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 18:19:11.443006  102554 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 18:19:11.443013  102554 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
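A sketch of enabling the tracing options above against a local OTLP collector (the endpoint is the documented default; the collector itself is assumed to exist):

    sudo tee /etc/crio/crio.conf.d/30-tracing.conf <<'EOF'
    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "0.0.0.0:4317"
    tracing_sampling_rate_per_million = 1000000   # hypothetical choice: sample every span
    EOF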
	I1206 18:19:11.443019  102554 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 18:19:11.443026  102554 command_runner.go:130] > [crio.stats]
	I1206 18:19:11.443036  102554 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 18:19:11.443051  102554 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 18:19:11.443060  102554 command_runner.go:130] > # stats_collection_period = 0
	I1206 18:19:11.443085  102554 command_runner.go:130] ! time="2023-12-06 18:19:11.434948208Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1206 18:19:11.443097  102554 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 18:19:11.443173  102554 cni.go:84] Creating CNI manager for ""
	I1206 18:19:11.443184  102554 cni.go:136] 1 nodes found, recommending kindnet
	I1206 18:19:11.443198  102554 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:19:11.443230  102554 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-193731 NodeName:multinode-193731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 18:19:11.443357  102554 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-193731"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 18:19:11.443416  102554 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-193731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 18:19:11.443465  102554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 18:19:11.451581  102554 command_runner.go:130] > kubeadm
	I1206 18:19:11.451606  102554 command_runner.go:130] > kubectl
	I1206 18:19:11.451613  102554 command_runner.go:130] > kubelet
	I1206 18:19:11.451635  102554 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:19:11.451683  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 18:19:11.460133  102554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1206 18:19:11.475643  102554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 18:19:11.491115  102554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1206 18:19:11.506453  102554 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1206 18:19:11.509523  102554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:19:11.518886  102554 certs.go:56] Setting up /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731 for IP: 192.168.58.2
	I1206 18:19:11.518915  102554 certs.go:190] acquiring lock for shared ca certs: {Name:mk88da27ec99c860f0c2ad3f4fab21b90cf40c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:11.519054  102554 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key
	I1206 18:19:11.519093  102554 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key
	I1206 18:19:11.519149  102554 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key
	I1206 18:19:11.519161  102554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt with IP's: []
	I1206 18:19:11.601626  102554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt ...
	I1206 18:19:11.601664  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt: {Name:mka682065d452613b68737fb49f0b7b7edd46a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:11.601865  102554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key ...
	I1206 18:19:11.601883  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key: {Name:mkbb56d16f7268a54d04a084b07e1206e76cf575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:11.601985  102554 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key.cee25041
	I1206 18:19:11.602004  102554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1206 18:19:11.924949  102554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt.cee25041 ...
	I1206 18:19:11.924989  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt.cee25041: {Name:mk5c093c8e6f7dcb399f9ad98f98151506765369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:11.925194  102554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key.cee25041 ...
	I1206 18:19:11.925215  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key.cee25041: {Name:mkcf79c93a64be82560a9ed2cf5200bb6e494860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:11.925318  102554 certs.go:337] copying /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt
	I1206 18:19:11.925413  102554 certs.go:341] copying /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key
	I1206 18:19:11.925507  102554 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.key
	I1206 18:19:11.925539  102554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.crt with IP's: []
	I1206 18:19:12.027217  102554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.crt ...
	I1206 18:19:12.027257  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.crt: {Name:mk4c2abf28f8ef186885158aeb7f22ff617b2988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:12.027475  102554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.key ...
	I1206 18:19:12.027499  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.key: {Name:mke79c0994f8c120ea7f12bc65f63a35835cc75f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:12.027593  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 18:19:12.027614  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 18:19:12.027624  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 18:19:12.027641  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 18:19:12.027653  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 18:19:12.027665  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 18:19:12.027677  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 18:19:12.027691  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 18:19:12.027744  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem (1338 bytes)
	W1206 18:19:12.027779  102554 certs.go:433] ignoring /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346_empty.pem, impossibly tiny 0 bytes
	I1206 18:19:12.027791  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 18:19:12.027816  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem (1078 bytes)
	I1206 18:19:12.027840  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:19:12.027863  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem (1675 bytes)
	I1206 18:19:12.027900  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:19:12.027926  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:19:12.027940  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem -> /usr/share/ca-certificates/16346.pem
	I1206 18:19:12.027952  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> /usr/share/ca-certificates/163462.pem
	I1206 18:19:12.028575  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1206 18:19:12.049909  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 18:19:12.070153  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 18:19:12.090753  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 18:19:12.110700  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:19:12.130996  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 18:19:12.151209  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:19:12.171495  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 18:19:12.191950  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:19:12.212505  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem --> /usr/share/ca-certificates/16346.pem (1338 bytes)
	I1206 18:19:12.233031  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /usr/share/ca-certificates/163462.pem (1708 bytes)
	I1206 18:19:12.253620  102554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 18:19:12.268593  102554 ssh_runner.go:195] Run: openssl version
	I1206 18:19:12.273318  102554 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1206 18:19:12.273394  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:19:12.281202  102554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:19:12.284201  102554 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:19:12.284242  102554 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:19:12.284357  102554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:19:12.290358  102554 command_runner.go:130] > b5213941
	I1206 18:19:12.290426  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 18:19:12.298721  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16346.pem && ln -fs /usr/share/ca-certificates/16346.pem /etc/ssl/certs/16346.pem"
	I1206 18:19:12.306769  102554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16346.pem
	I1206 18:19:12.309863  102554 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:06 /usr/share/ca-certificates/16346.pem
	I1206 18:19:12.309885  102554 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:06 /usr/share/ca-certificates/16346.pem
	I1206 18:19:12.309922  102554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16346.pem
	I1206 18:19:12.315590  102554 command_runner.go:130] > 51391683
	I1206 18:19:12.315757  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16346.pem /etc/ssl/certs/51391683.0"
	I1206 18:19:12.323668  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163462.pem && ln -fs /usr/share/ca-certificates/163462.pem /etc/ssl/certs/163462.pem"
	I1206 18:19:12.331855  102554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163462.pem
	I1206 18:19:12.334969  102554 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:06 /usr/share/ca-certificates/163462.pem
	I1206 18:19:12.335002  102554 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:06 /usr/share/ca-certificates/163462.pem
	I1206 18:19:12.335049  102554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163462.pem
	I1206 18:19:12.341341  102554 command_runner.go:130] > 3ec20f2e
	I1206 18:19:12.341417  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163462.pem /etc/ssl/certs/3ec20f2e.0"
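The three blocks above all follow the same OpenSSL convention: trust lookups in /etc/ssl/certs go through symlinks named <subject-hash>.0. Reproducing one link by hand (paths taken from the log; the verify step is an illustrative extra):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941, as above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem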
	I1206 18:19:12.350266  102554 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:19:12.353319  102554 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:19:12.353364  102554 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:19:12.353401  102554 kubeadm.go:404] StartCluster: {Name:multinode-193731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:19:12.353483  102554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 18:19:12.353526  102554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 18:19:12.385831  102554 cri.go:89] found id: ""
	I1206 18:19:12.385907  102554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 18:19:12.393737  102554 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1206 18:19:12.393774  102554 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1206 18:19:12.393787  102554 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1206 18:19:12.393864  102554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 18:19:12.401592  102554 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1206 18:19:12.401660  102554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 18:19:12.409023  102554 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1206 18:19:12.409054  102554 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1206 18:19:12.409062  102554 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1206 18:19:12.409071  102554 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:19:12.409100  102554 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 18:19:12.409136  102554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 18:19:12.451886  102554 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1206 18:19:12.451923  102554 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1206 18:19:12.451977  102554 kubeadm.go:322] [preflight] Running pre-flight checks
	I1206 18:19:12.451986  102554 command_runner.go:130] > [preflight] Running pre-flight checks
	I1206 18:19:12.485309  102554 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1206 18:19:12.485334  102554 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1206 18:19:12.485396  102554 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1206 18:19:12.485406  102554 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1206 18:19:12.485482  102554 kubeadm.go:322] OS: Linux
	I1206 18:19:12.485515  102554 command_runner.go:130] > OS: Linux
	I1206 18:19:12.485581  102554 kubeadm.go:322] CGROUPS_CPU: enabled
	I1206 18:19:12.485593  102554 command_runner.go:130] > CGROUPS_CPU: enabled
	I1206 18:19:12.485661  102554 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1206 18:19:12.485672  102554 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1206 18:19:12.485743  102554 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1206 18:19:12.485753  102554 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1206 18:19:12.485816  102554 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1206 18:19:12.485827  102554 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1206 18:19:12.485892  102554 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1206 18:19:12.485902  102554 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1206 18:19:12.485970  102554 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1206 18:19:12.485980  102554 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1206 18:19:12.486040  102554 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1206 18:19:12.486050  102554 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1206 18:19:12.486119  102554 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1206 18:19:12.486129  102554 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1206 18:19:12.486201  102554 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1206 18:19:12.486210  102554 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1206 18:19:12.546022  102554 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:19:12.546061  102554 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 18:19:12.546184  102554 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:19:12.546213  102554 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 18:19:12.546378  102554 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:19:12.546383  102554 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1206 18:19:12.735718  102554 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:19:12.739061  102554 out.go:204]   - Generating certificates and keys ...
	I1206 18:19:12.735822  102554 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 18:19:12.739223  102554 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1206 18:19:12.739241  102554 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1206 18:19:12.739307  102554 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1206 18:19:12.739315  102554 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1206 18:19:12.920554  102554 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:19:12.920599  102554 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 18:19:13.186294  102554 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:19:13.186326  102554 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1206 18:19:13.768894  102554 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1206 18:19:13.768920  102554 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1206 18:19:13.891240  102554 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1206 18:19:13.891264  102554 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1206 18:19:14.061271  102554 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1206 18:19:14.061308  102554 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1206 18:19:14.061472  102554 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-193731] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1206 18:19:14.061484  102554 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-193731] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1206 18:19:14.266112  102554 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1206 18:19:14.266153  102554 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1206 18:19:14.266286  102554 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-193731] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1206 18:19:14.266300  102554 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-193731] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1206 18:19:14.628089  102554 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:19:14.628122  102554 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 18:19:14.725335  102554 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:19:14.725371  102554 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 18:19:15.182429  102554 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1206 18:19:15.182460  102554 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1206 18:19:15.182564  102554 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:19:15.182602  102554 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 18:19:15.289498  102554 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:19:15.289526  102554 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 18:19:15.571877  102554 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:19:15.571908  102554 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 18:19:15.677291  102554 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:19:15.677314  102554 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 18:19:15.820841  102554 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:19:15.820873  102554 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 18:19:15.821278  102554 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:19:15.821303  102554 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 18:19:15.823573  102554 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:19:15.825709  102554 out.go:204]   - Booting up control plane ...
	I1206 18:19:15.823665  102554 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 18:19:15.825802  102554 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:19:15.825815  102554 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 18:19:15.825932  102554 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:19:15.825938  102554 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 18:19:15.826439  102554 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:19:15.826455  102554 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 18:19:15.835756  102554 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:19:15.835782  102554 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:19:15.836814  102554 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:19:15.836831  102554 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:19:15.836897  102554 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1206 18:19:15.836909  102554 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 18:19:15.921426  102554 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 18:19:15.921474  102554 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1206 18:19:20.423117  102554 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501731 seconds
	I1206 18:19:20.423170  102554 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.501731 seconds
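The health wait logged above amounts to polling the apiserver until it answers on its health endpoint. A minimal, hypothetical Go sketch of that kind of poll (not kubeadm's actual code; the endpoint URL and the 4m0s budget are taken from this log, the rest is assumption):

// healthpoll.go - hypothetical sketch of a control-plane health wait.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The bootstrapping apiserver serves a self-signed cert, so this sketch
	// skips verification; a real client would pin the cluster CA instead.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("control plane healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for control plane")
}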
	I1206 18:19:20.423333  102554 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 18:19:20.423347  102554 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 18:19:20.437715  102554 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 18:19:20.437748  102554 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 18:19:20.957157  102554 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 18:19:20.957195  102554 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1206 18:19:20.957415  102554 kubeadm.go:322] [mark-control-plane] Marking the node multinode-193731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 18:19:20.957426  102554 command_runner.go:130] > [mark-control-plane] Marking the node multinode-193731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 18:19:21.466707  102554 kubeadm.go:322] [bootstrap-token] Using token: uaes1t.3nrsrs7ejdm4633o
	I1206 18:19:21.466761  102554 command_runner.go:130] > [bootstrap-token] Using token: uaes1t.3nrsrs7ejdm4633o
	I1206 18:19:21.468607  102554 out.go:204]   - Configuring RBAC rules ...
	I1206 18:19:21.468767  102554 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 18:19:21.468791  102554 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 18:19:21.472442  102554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 18:19:21.472473  102554 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 18:19:21.479274  102554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 18:19:21.479295  102554 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 18:19:21.483259  102554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 18:19:21.483281  102554 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 18:19:21.486438  102554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 18:19:21.486466  102554 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 18:19:21.489244  102554 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 18:19:21.489264  102554 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 18:19:21.499621  102554 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 18:19:21.499664  102554 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 18:19:21.721805  102554 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1206 18:19:21.721833  102554 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1206 18:19:21.906290  102554 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1206 18:19:21.906336  102554 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1206 18:19:21.907418  102554 kubeadm.go:322] 
	I1206 18:19:21.907523  102554 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1206 18:19:21.907548  102554 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1206 18:19:21.907560  102554 kubeadm.go:322] 
	I1206 18:19:21.907675  102554 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1206 18:19:21.907721  102554 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1206 18:19:21.907730  102554 kubeadm.go:322] 
	I1206 18:19:21.907765  102554 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1206 18:19:21.907776  102554 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1206 18:19:21.907854  102554 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 18:19:21.907869  102554 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 18:19:21.907962  102554 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 18:19:21.907977  102554 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 18:19:21.907983  102554 kubeadm.go:322] 
	I1206 18:19:21.908054  102554 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1206 18:19:21.908072  102554 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1206 18:19:21.908079  102554 kubeadm.go:322] 
	I1206 18:19:21.908143  102554 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 18:19:21.908154  102554 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 18:19:21.908164  102554 kubeadm.go:322] 
	I1206 18:19:21.908237  102554 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1206 18:19:21.908248  102554 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1206 18:19:21.908367  102554 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 18:19:21.908382  102554 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 18:19:21.908471  102554 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 18:19:21.908481  102554 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 18:19:21.908488  102554 kubeadm.go:322] 
	I1206 18:19:21.908600  102554 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 18:19:21.908612  102554 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1206 18:19:21.908698  102554 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1206 18:19:21.908710  102554 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1206 18:19:21.908716  102554 kubeadm.go:322] 
	I1206 18:19:21.908828  102554 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uaes1t.3nrsrs7ejdm4633o \
	I1206 18:19:21.908840  102554 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token uaes1t.3nrsrs7ejdm4633o \
	I1206 18:19:21.908976  102554 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 \
	I1206 18:19:21.908986  102554 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 \
	I1206 18:19:21.909013  102554 kubeadm.go:322] 	--control-plane 
	I1206 18:19:21.909024  102554 command_runner.go:130] > 	--control-plane 
	I1206 18:19:21.909030  102554 kubeadm.go:322] 
	I1206 18:19:21.909148  102554 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1206 18:19:21.909160  102554 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1206 18:19:21.909166  102554 kubeadm.go:322] 
	I1206 18:19:21.909273  102554 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uaes1t.3nrsrs7ejdm4633o \
	I1206 18:19:21.909284  102554 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uaes1t.3nrsrs7ejdm4633o \
	I1206 18:19:21.909419  102554 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 
	I1206 18:19:21.909433  102554 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 
	I1206 18:19:21.911452  102554 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1206 18:19:21.911474  102554 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1206 18:19:21.911590  102554 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 18:19:21.911604  102554 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 18:19:21.911623  102554 cni.go:84] Creating CNI manager for ""
	I1206 18:19:21.911634  102554 cni.go:136] 1 nodes found, recommending kindnet
	I1206 18:19:21.913529  102554 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1206 18:19:21.915197  102554 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 18:19:21.920040  102554 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 18:19:21.920088  102554 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1206 18:19:21.920103  102554 command_runner.go:130] > Device: 37h/55d	Inode: 547375      Links: 1
	I1206 18:19:21.920119  102554 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 18:19:21.920129  102554 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1206 18:19:21.920141  102554 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1206 18:19:21.920150  102554 command_runner.go:130] > Change: 2023-12-06 18:00:34.126507801 +0000
	I1206 18:19:21.920162  102554 command_runner.go:130] >  Birth: 2023-12-06 18:00:34.102506143 +0000
	I1206 18:19:21.920232  102554 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 18:19:21.920247  102554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 18:19:21.936900  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 18:19:22.556907  102554 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1206 18:19:22.561971  102554 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1206 18:19:22.570930  102554 command_runner.go:130] > serviceaccount/kindnet created
	I1206 18:19:22.580066  102554 command_runner.go:130] > daemonset.apps/kindnet created
	I1206 18:19:22.584279  102554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 18:19:22.584341  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:22.584384  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3 minikube.k8s.io/name=multinode-193731 minikube.k8s.io/updated_at=2023_12_06T18_19_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:22.657329  102554 command_runner.go:130] > node/multinode-193731 labeled
	I1206 18:19:22.659943  102554 command_runner.go:130] > -16
	I1206 18:19:22.659980  102554 ops.go:34] apiserver oom_adj: -16
	I1206 18:19:22.660012  102554 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
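The oom_adj probe a few lines up is a read of /proc/<pid>/oom_adj for the kube-apiserver process; the -16 score keeps the OOM killer away from it. A small Go sketch of the same probe (an illustration of the bash one-liner shown in the log, assuming a single apiserver process):

// oomadj.go - sketch of: cat /proc/$(pgrep kube-apiserver)/oom_adj
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		os.Exit(1)
	}
	// pgrep succeeded, so there is at least one PID; take the first.
	pid := strings.Fields(string(out))[0]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}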
	I1206 18:19:22.660097  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:22.731249  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:22.731339  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:22.796446  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:23.297276  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:23.362076  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:23.797458  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:23.861875  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:24.297544  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:24.359857  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:24.796860  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:24.858271  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:25.297512  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:25.363052  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:25.797477  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:25.859936  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:26.297517  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:26.362919  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:26.797513  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:26.860353  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:27.297464  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:27.360836  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:27.797504  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:27.863817  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:28.296932  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:28.360058  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:28.797370  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:28.863631  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:29.297499  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:29.360729  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:29.796905  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:29.862814  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:30.297287  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:30.362031  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:30.797502  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:30.863023  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:31.297466  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:31.361176  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:31.797511  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:31.861727  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:32.297328  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:32.361630  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:32.797522  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:32.857604  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:33.296851  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:33.359724  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:33.797369  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:33.863618  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:34.296935  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:34.363838  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:34.797496  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:34.861661  102554 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1206 18:19:35.296914  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:19:35.360722  102554 command_runner.go:130] > NAME      SECRETS   AGE
	I1206 18:19:35.360897  102554 command_runner.go:130] > default   0         0s
	I1206 18:19:35.363865  102554 kubeadm.go:1088] duration metric: took 12.779599886s to wait for elevateKubeSystemPrivileges.
	I1206 18:19:35.363912  102554 kubeadm.go:406] StartCluster complete in 23.010512727s
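The 12.8s elevateKubeSystemPrivileges wait above retries "kubectl get sa default" roughly every 500ms until the ServiceAccount controller has created the account. A hedged client-go sketch of the same poll (minikube itself shells out to kubectl over SSH, as the log shows; this is only an equivalent in API terms):

// waitsa.go - sketch of polling for the "default" ServiceAccount.
package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists, matching
// the NotFound-then-found sequence in the log above.
func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // controller hasn't created it yet; keep polling
			}
			return err == nil, err
		})
}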
	I1206 18:19:35.363935  102554 settings.go:142] acquiring lock: {Name:mk659e0e4749486c04957a41070055ba699e8e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:35.364011  102554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:19:35.364675  102554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/kubeconfig: {Name:mk369d6bc31165e4100c77201c4dc2786cd89bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:19:35.364887  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 18:19:35.364958  102554 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1206 18:19:35.365018  102554 addons.go:69] Setting storage-provisioner=true in profile "multinode-193731"
	I1206 18:19:35.365035  102554 addons.go:231] Setting addon storage-provisioner=true in "multinode-193731"
	I1206 18:19:35.365038  102554 addons.go:69] Setting default-storageclass=true in profile "multinode-193731"
	I1206 18:19:35.365068  102554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-193731"
	I1206 18:19:35.365094  102554 host.go:66] Checking if "multinode-193731" exists ...
	I1206 18:19:35.365155  102554 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:19:35.365266  102554 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:19:35.365428  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:19:35.365572  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:19:35.365597  102554 kapi.go:59] client config for multinode-193731: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:19:35.366379  102554 cert_rotation.go:137] Starting client certificate rotation controller
	I1206 18:19:35.366681  102554 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 18:19:35.366704  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:35.366715  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:35.366724  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:35.375012  102554 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1206 18:19:35.375051  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:35.375069  102554 round_trippers.go:580]     Audit-Id: c0c23060-11e1-4cf8-8b86-94bbb5b9d74a
	I1206 18:19:35.375079  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:35.375088  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:35.375097  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:35.375107  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:35.375126  102554 round_trippers.go:580]     Content-Length: 291
	I1206 18:19:35.375135  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:35 GMT
	I1206 18:19:35.375174  102554 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b5e38ad4-b7b3-450e-bec9-3b49e7e61e29","resourceVersion":"257","creationTimestamp":"2023-12-06T18:19:21Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 18:19:35.375615  102554 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b5e38ad4-b7b3-450e-bec9-3b49e7e61e29","resourceVersion":"257","creationTimestamp":"2023-12-06T18:19:21Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 18:19:35.375690  102554 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 18:19:35.375707  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:35.375718  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:35.375734  102554 round_trippers.go:473]     Content-Type: application/json
	I1206 18:19:35.375743  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:35.384285  102554 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1206 18:19:35.384315  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:35.384327  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:35.384337  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:35.384346  102554 round_trippers.go:580]     Content-Length: 291
	I1206 18:19:35.384354  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:35 GMT
	I1206 18:19:35.384362  102554 round_trippers.go:580]     Audit-Id: 33690378-bcf3-49fd-854d-5e36fb348ff7
	I1206 18:19:35.384374  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:35.384386  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:35.384420  102554 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b5e38ad4-b7b3-450e-bec9-3b49e7e61e29","resourceVersion":"335","creationTimestamp":"2023-12-06T18:19:21Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 18:19:35.384585  102554 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 18:19:35.384601  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:35.384611  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:35.384624  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:35.386410  102554 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:19:35.386613  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:19:35.386635  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:35.386646  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:35.386655  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:35.386664  102554 round_trippers.go:580]     Content-Length: 291
	I1206 18:19:35.386617  102554 kapi.go:59] client config for multinode-193731: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:19:35.386676  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:35 GMT
	I1206 18:19:35.386687  102554 round_trippers.go:580]     Audit-Id: a4e8f8ff-fd8a-4225-94e0-55179c1237ef
	I1206 18:19:35.386699  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:35.386711  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:35.386741  102554 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b5e38ad4-b7b3-450e-bec9-3b49e7e61e29","resourceVersion":"335","creationTimestamp":"2023-12-06T18:19:21Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1206 18:19:35.386826  102554 addons.go:231] Setting addon default-storageclass=true in "multinode-193731"
	I1206 18:19:35.386830  102554 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-193731" context rescaled to 1 replicas
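The GET/PUT exchange against the coredns scale subresource logged above trims the deployment from two replicas to one. The same operation via client-go's typed scale helpers would look roughly like this (a sketch only; the log shows minikube issuing the REST calls directly through its own round-tripper):

// rescale.go - sketch of the coredns Scale GET/PUT seen in the log.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS fetches the coredns deployment's scale subresource and
// writes it back with a single replica.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // one replica is enough on a single-node cluster
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}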
	I1206 18:19:35.386853  102554 host.go:66] Checking if "multinode-193731" exists ...
	I1206 18:19:35.386861  102554 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:19:35.388588  102554 out.go:177] * Verifying Kubernetes components...
	I1206 18:19:35.387172  102554 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:19:35.390195  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:19:35.391757  102554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:19:35.393207  102554 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:19:35.393232  102554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 18:19:35.393284  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:35.408216  102554 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 18:19:35.408248  102554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 18:19:35.408330  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:19:35.414881  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:35.429097  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:19:35.449707  102554 command_runner.go:130] > apiVersion: v1
	I1206 18:19:35.449733  102554 command_runner.go:130] > data:
	I1206 18:19:35.449739  102554 command_runner.go:130] >   Corefile: |
	I1206 18:19:35.449744  102554 command_runner.go:130] >     .:53 {
	I1206 18:19:35.449748  102554 command_runner.go:130] >         errors
	I1206 18:19:35.449753  102554 command_runner.go:130] >         health {
	I1206 18:19:35.449758  102554 command_runner.go:130] >            lameduck 5s
	I1206 18:19:35.449762  102554 command_runner.go:130] >         }
	I1206 18:19:35.449768  102554 command_runner.go:130] >         ready
	I1206 18:19:35.449778  102554 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1206 18:19:35.449783  102554 command_runner.go:130] >            pods insecure
	I1206 18:19:35.449796  102554 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1206 18:19:35.449808  102554 command_runner.go:130] >            ttl 30
	I1206 18:19:35.449814  102554 command_runner.go:130] >         }
	I1206 18:19:35.449827  102554 command_runner.go:130] >         prometheus :9153
	I1206 18:19:35.449833  102554 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1206 18:19:35.449841  102554 command_runner.go:130] >            max_concurrent 1000
	I1206 18:19:35.449844  102554 command_runner.go:130] >         }
	I1206 18:19:35.449848  102554 command_runner.go:130] >         cache 30
	I1206 18:19:35.449855  102554 command_runner.go:130] >         loop
	I1206 18:19:35.449860  102554 command_runner.go:130] >         reload
	I1206 18:19:35.449870  102554 command_runner.go:130] >         loadbalance
	I1206 18:19:35.449876  102554 command_runner.go:130] >     }
	I1206 18:19:35.449886  102554 command_runner.go:130] > kind: ConfigMap
	I1206 18:19:35.449895  102554 command_runner.go:130] > metadata:
	I1206 18:19:35.449908  102554 command_runner.go:130] >   creationTimestamp: "2023-12-06T18:19:21Z"
	I1206 18:19:35.449915  102554 command_runner.go:130] >   name: coredns
	I1206 18:19:35.449920  102554 command_runner.go:130] >   namespace: kube-system
	I1206 18:19:35.449926  102554 command_runner.go:130] >   resourceVersion: "253"
	I1206 18:19:35.449932  102554 command_runner.go:130] >   uid: 7fa613ee-e7be-4d4b-97a7-b5166a7fd1f4
	I1206 18:19:35.450105  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
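Applied to the ConfigMap dumped just above, that sed pipeline injects a "log" directive before "errors" and a hosts block for host.minikube.internal before the forward stanza, so the replaced Corefile should come out as (a reconstruction from the commands in this log, not captured output):

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }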
	I1206 18:19:35.450424  102554 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:19:35.450723  102554 kapi.go:59] client config for multinode-193731: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:19:35.451062  102554 node_ready.go:35] waiting up to 6m0s for node "multinode-193731" to be "Ready" ...
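The node_ready wait that starts here repeatedly fetches /api/v1/nodes/multinode-193731 (the GET requests that follow) and inspects the node's Ready condition. A hedged client-go equivalent of one iteration (a sketch; minikube drives the request through its own logged round-tripper):

// nodeready.go - sketch of the readiness check behind node_ready.go.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node has condition Ready=True.
func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}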
	I1206 18:19:35.451173  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:35.451186  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:35.451197  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:35.451209  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:35.453387  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:35.453404  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:35.453410  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:35.453415  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:35.453420  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:35.453426  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:35.453433  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:35 GMT
	I1206 18:19:35.453441  102554 round_trippers.go:580]     Audit-Id: b6bb630f-bd12-4745-9e38-753c5321a07e
	I1206 18:19:35.453557  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"330","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I1206 18:19:35.454276  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:35.454295  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:35.454305  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:35.454313  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:35.456675  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:35.456699  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:35.456710  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:35 GMT
	I1206 18:19:35.456720  102554 round_trippers.go:580]     Audit-Id: ea9334d3-fdc8-4319-a0e6-717c93884481
	I1206 18:19:35.456729  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:35.456746  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:35.456755  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:35.456774  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:35.456915  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"330","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I1206 18:19:35.519904  102554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 18:19:35.622900  102554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 18:19:35.957598  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:35.957624  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:35.957632  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:35.957637  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:36.002345  102554 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I1206 18:19:36.002384  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:36.002396  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:36.002405  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:36.002412  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:36.002419  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:36.002426  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:36 GMT
	I1206 18:19:36.002433  102554 round_trippers.go:580]     Audit-Id: 0ee069b7-dafe-4d2f-84b5-4ddd34452318
	I1206 18:19:36.003126  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:36.027513  102554 command_runner.go:130] > configmap/coredns replaced
	I1206 18:19:36.106515  102554 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1206 18:19:36.443813  102554 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1206 18:19:36.450850  102554 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1206 18:19:36.458038  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:36.458062  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:36.458073  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:36.458081  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:36.458645  102554 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1206 18:19:36.460961  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:36.460985  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:36.460992  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:36.460998  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:36.461004  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:36.461010  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:36 GMT
	I1206 18:19:36.461015  102554 round_trippers.go:580]     Audit-Id: 7071caa3-2c1f-4770-af87-7a51ccbf986e
	I1206 18:19:36.461025  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:36.461149  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:36.503361  102554 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1206 18:19:36.513587  102554 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1206 18:19:36.524691  102554 command_runner.go:130] > pod/storage-provisioner created
	I1206 18:19:36.530564  102554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.010619676s)
	I1206 18:19:36.530635  102554 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1206 18:19:36.530765  102554 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1206 18:19:36.530774  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:36.530786  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:36.530796  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:36.533948  102554 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 18:19:36.533990  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:36.534000  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:36.534010  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:36.534018  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:36.534027  102554 round_trippers.go:580]     Content-Length: 1273
	I1206 18:19:36.534044  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:36 GMT
	I1206 18:19:36.534054  102554 round_trippers.go:580]     Audit-Id: 1506822e-b27b-428d-ba0a-58380159b84d
	I1206 18:19:36.534069  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:36.534300  102554 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"standard","uid":"eecfa733-88e0-4871-9cb7-b0b7169370e3","resourceVersion":"386","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1206 18:19:36.534805  102554 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eecfa733-88e0-4871-9cb7-b0b7169370e3","resourceVersion":"386","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1206 18:19:36.534875  102554 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1206 18:19:36.534890  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:36.534900  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:36.534914  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:36.534926  102554 round_trippers.go:473]     Content-Type: application/json
	I1206 18:19:36.537727  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:36.537752  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:36.537763  102554 round_trippers.go:580]     Audit-Id: 9888b8d4-6a8f-46b2-bf3d-e047012446e6
	I1206 18:19:36.537773  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:36.537780  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:36.537790  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:36.537802  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:36.537812  102554 round_trippers.go:580]     Content-Length: 1220
	I1206 18:19:36.537824  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:36 GMT
	I1206 18:19:36.537859  102554 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eecfa733-88e0-4871-9cb7-b0b7169370e3","resourceVersion":"386","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1206 18:19:36.540805  102554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1206 18:19:36.542301  102554 addons.go:502] enable addons completed in 1.177338992s: enabled=[storage-provisioner default-storageclass]
	I1206 18:19:36.958393  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:36.958420  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:36.958432  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:36.958441  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:36.961145  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:36.961174  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:36.961181  102554 round_trippers.go:580]     Audit-Id: 36f92031-930f-4760-87b0-11dc3e2f0d5f
	I1206 18:19:36.961189  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:36.961197  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:36.961205  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:36.961216  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:36.961227  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:36 GMT
	I1206 18:19:36.961421  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:37.457961  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:37.457990  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:37.457999  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:37.458005  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:37.460241  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:37.460261  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:37.460286  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:37.460296  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:37 GMT
	I1206 18:19:37.460303  102554 round_trippers.go:580]     Audit-Id: 2032fe2e-d2f1-4c5e-bad9-7c8bedc3186c
	I1206 18:19:37.460309  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:37.460314  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:37.460319  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:37.460434  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:37.460761  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:37.958077  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:37.958099  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:37.958106  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:37.958112  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:37.960411  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:37.960436  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:37.960445  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:37.960452  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:37.960459  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:37.960467  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:37.960478  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:37 GMT
	I1206 18:19:37.960489  102554 round_trippers.go:580]     Audit-Id: 03c4d6bf-45be-4710-8951-24cce6153051
	I1206 18:19:37.960696  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:38.458259  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:38.458302  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:38.458311  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:38.458317  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:38.460804  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:38.460829  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:38.460838  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:38.460845  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:38 GMT
	I1206 18:19:38.460852  102554 round_trippers.go:580]     Audit-Id: d7842b47-0876-476e-9fff-af138cbd8400
	I1206 18:19:38.460859  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:38.460868  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:38.460879  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:38.461006  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:38.957658  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:38.957683  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:38.957694  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:38.957702  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:38.959861  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:38.959880  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:38.959887  102554 round_trippers.go:580]     Audit-Id: 66a3f179-cc3b-4afb-86c1-72a7bf1878e3
	I1206 18:19:38.959905  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:38.959910  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:38.959915  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:38.959920  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:38.959925  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:38 GMT
	I1206 18:19:38.960057  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:39.458162  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:39.458191  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:39.458200  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:39.458209  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:39.460659  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:39.460684  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:39.460693  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:39.460701  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:39.460708  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:39.460715  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:39.460721  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:39 GMT
	I1206 18:19:39.460729  102554 round_trippers.go:580]     Audit-Id: 9674e97c-5508-458c-9393-66c1afcccee1
	I1206 18:19:39.460833  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:39.461169  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:39.958541  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:39.958566  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:39.958575  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:39.958581  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:39.960880  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:39.960909  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:39.960980  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:39.961012  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:39.961020  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:39.961028  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:39 GMT
	I1206 18:19:39.961034  102554 round_trippers.go:580]     Audit-Id: 1a17be9c-65d4-486c-a9c5-a7fbc061f740
	I1206 18:19:39.961041  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:39.961180  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:40.457570  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:40.457599  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:40.457622  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:40.457629  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:40.460147  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:40.460174  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:40.460186  102554 round_trippers.go:580]     Audit-Id: 91aaf4c5-9387-4785-8154-f68ecfd8adbe
	I1206 18:19:40.460195  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:40.460204  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:40.460212  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:40.460222  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:40.460232  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:40 GMT
	I1206 18:19:40.460394  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:40.957899  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:40.957928  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:40.957942  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:40.957950  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:40.960232  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:40.960256  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:40.960285  102554 round_trippers.go:580]     Audit-Id: bbfde0c1-8d4b-4dca-bce7-91564a6768db
	I1206 18:19:40.960295  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:40.960305  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:40.960313  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:40.960320  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:40.960326  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:40 GMT
	I1206 18:19:40.960464  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:41.457983  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:41.458014  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:41.458023  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:41.458029  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:41.460351  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:41.460375  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:41.460382  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:41.460390  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:41.460398  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:41.460407  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:41.460425  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:41 GMT
	I1206 18:19:41.460438  102554 round_trippers.go:580]     Audit-Id: 5d2230bb-dd84-402c-bb43-67fb0c31b845
	I1206 18:19:41.460572  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:41.958210  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:41.958236  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:41.958244  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:41.958250  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:41.960404  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:41.960425  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:41.960432  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:41.960437  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:41.960443  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:41.960448  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:41.960456  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:41 GMT
	I1206 18:19:41.960463  102554 round_trippers.go:580]     Audit-Id: d73b35f1-fce6-4de3-8a9e-a28e1f5fefdd
	I1206 18:19:41.960590  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:41.960983  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:42.458227  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:42.458251  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:42.458266  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:42.458272  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:42.460570  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:42.460596  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:42.460606  102554 round_trippers.go:580]     Audit-Id: 76377838-e223-448e-ae52-52e39bafa757
	I1206 18:19:42.460613  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:42.460620  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:42.460628  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:42.460635  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:42.460644  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:42 GMT
	I1206 18:19:42.460766  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:42.958141  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:42.958172  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:42.958186  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:42.958197  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:42.960499  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:42.960520  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:42.960530  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:42.960536  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:42 GMT
	I1206 18:19:42.960541  102554 round_trippers.go:580]     Audit-Id: 4da9ebfa-c06d-4c2a-b478-d54910622ff2
	I1206 18:19:42.960546  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:42.960551  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:42.960573  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:42.960806  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:43.458451  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:43.458480  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:43.458489  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:43.458496  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:43.460734  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:43.460756  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:43.460763  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:43.460768  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:43.460774  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:43.460780  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:43 GMT
	I1206 18:19:43.460788  102554 round_trippers.go:580]     Audit-Id: 6b6ea2ef-3535-4a3b-9854-2c7f615fcc86
	I1206 18:19:43.460795  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:43.460925  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:43.957628  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:43.957657  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:43.957666  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:43.957673  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:43.959912  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:43.959936  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:43.959949  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:43 GMT
	I1206 18:19:43.959958  102554 round_trippers.go:580]     Audit-Id: 83b3210c-235e-465a-8609-6bf6a720c5b4
	I1206 18:19:43.959967  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:43.959974  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:43.959986  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:43.959998  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:43.960151  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:44.457827  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:44.457852  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:44.457861  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:44.457866  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:44.460043  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:44.460070  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:44.460077  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:44.460082  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:44.460088  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:44.460093  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:44 GMT
	I1206 18:19:44.460098  102554 round_trippers.go:580]     Audit-Id: 2ca282fc-45c3-4ae7-9f00-c342554e80e8
	I1206 18:19:44.460103  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:44.460242  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:44.460591  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:44.957882  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:44.957905  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:44.957913  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:44.957919  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:44.960179  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:44.960203  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:44.960215  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:44.960223  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:44.960230  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:44.960237  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:44.960245  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:44 GMT
	I1206 18:19:44.960254  102554 round_trippers.go:580]     Audit-Id: f83f2eac-034d-4794-abf1-dbbbf0e6f002
	I1206 18:19:44.960392  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:45.457732  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:45.457762  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:45.457770  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:45.457776  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:45.460047  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:45.460073  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:45.460082  102554 round_trippers.go:580]     Audit-Id: f0d97af3-a15f-4faa-98c5-faf1411e3911
	I1206 18:19:45.460089  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:45.460105  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:45.460113  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:45.460124  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:45.460131  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:45 GMT
	I1206 18:19:45.460230  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:45.957858  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:45.957884  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:45.957892  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:45.957898  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:45.960106  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:45.960125  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:45.960131  102554 round_trippers.go:580]     Audit-Id: 1d7b89a9-29d1-4095-a3be-7ea444bc2f6a
	I1206 18:19:45.960137  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:45.960142  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:45.960147  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:45.960155  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:45.960160  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:45 GMT
	I1206 18:19:45.960293  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:46.457868  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:46.457894  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:46.457902  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:46.457908  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:46.460351  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:46.460378  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:46.460389  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:46.460401  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:46 GMT
	I1206 18:19:46.460410  102554 round_trippers.go:580]     Audit-Id: 6bb7afe0-b81e-4f4a-8e01-0e44068b649e
	I1206 18:19:46.460418  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:46.460425  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:46.460431  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:46.460518  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:46.460850  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:46.958201  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:46.958222  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:46.958231  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:46.958237  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:46.962402  102554 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1206 18:19:46.962438  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:46.962450  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:46 GMT
	I1206 18:19:46.962459  102554 round_trippers.go:580]     Audit-Id: ab09e0d2-912a-4c93-b9ab-bd2773d73d29
	I1206 18:19:46.962467  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:46.962476  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:46.962485  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:46.962497  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:46.962637  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:47.458214  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:47.458241  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:47.458255  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:47.458264  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:47.460509  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:47.460532  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:47.460538  102554 round_trippers.go:580]     Audit-Id: 29d459d0-ec6b-4b8f-a6e5-b95c5dda1031
	I1206 18:19:47.460544  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:47.460549  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:47.460554  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:47.460561  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:47.460567  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:47 GMT
	I1206 18:19:47.460730  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:47.958431  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:47.958457  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:47.958465  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:47.958471  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:47.960736  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:47.960765  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:47.960774  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:47.960785  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:47.960794  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:47 GMT
	I1206 18:19:47.960803  102554 round_trippers.go:580]     Audit-Id: c220bda3-2f3d-4afc-9e35-0ef779e5b18f
	I1206 18:19:47.960809  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:47.960815  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:47.961035  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:48.457509  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:48.457534  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:48.457543  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:48.457549  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:48.459823  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:48.459848  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:48.459858  102554 round_trippers.go:580]     Audit-Id: b0f6ac6a-3c34-42bd-93e5-57cc6848f324
	I1206 18:19:48.459866  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:48.459873  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:48.459879  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:48.459884  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:48.459890  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:48 GMT
	I1206 18:19:48.459983  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:48.957561  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:48.957586  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:48.957594  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:48.957600  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:48.959851  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:48.959873  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:48.959879  102554 round_trippers.go:580]     Audit-Id: 2704859d-5144-45b6-bb9f-544f03c0c5a7
	I1206 18:19:48.959885  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:48.959890  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:48.959895  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:48.959900  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:48.959905  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:48 GMT
	I1206 18:19:48.960029  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:48.960435  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:49.458102  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:49.458126  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:49.458134  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:49.458142  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:49.460391  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:49.460423  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:49.460435  102554 round_trippers.go:580]     Audit-Id: 17c437eb-7eb0-46c8-9158-72fea4530ff4
	I1206 18:19:49.460450  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:49.460459  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:49.460471  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:49.460483  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:49.460497  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:49 GMT
	I1206 18:19:49.460624  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:49.958229  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:49.958255  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:49.958263  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:49.958269  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:49.960613  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:49.960640  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:49.960651  102554 round_trippers.go:580]     Audit-Id: e5e8a572-2ca2-4a89-8173-bcd88d03f8de
	I1206 18:19:49.960659  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:49.960664  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:49.960670  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:49.960678  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:49.960686  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:49 GMT
	I1206 18:19:49.960787  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:50.458356  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:50.458382  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:50.458391  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:50.458397  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:50.460736  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:50.460763  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:50.460770  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:50.460776  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:50.460781  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:50.460786  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:50 GMT
	I1206 18:19:50.460791  102554 round_trippers.go:580]     Audit-Id: 32b3595f-3f8d-4cc1-8d1c-cbe281e0f56c
	I1206 18:19:50.460796  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:50.460879  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:50.958462  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:50.958487  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:50.958495  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:50.958501  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:50.960822  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:50.960845  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:50.960855  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:50.960865  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:50.960874  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:50.960883  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:50 GMT
	I1206 18:19:50.960889  102554 round_trippers.go:580]     Audit-Id: 80661df9-0a63-42b9-8ab7-3056b26822e9
	I1206 18:19:50.960896  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:50.960987  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:50.961317  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:51.457537  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:51.457561  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:51.457569  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:51.457577  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:51.459844  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:51.459870  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:51.459879  102554 round_trippers.go:580]     Audit-Id: 67fa5abd-1ea2-4606-90b4-557b249475e7
	I1206 18:19:51.459888  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:51.459896  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:51.459905  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:51.459911  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:51.459919  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:51 GMT
	I1206 18:19:51.460014  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:51.957551  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:51.957577  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:51.957585  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:51.957591  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:51.960121  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:51.960145  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:51.960154  102554 round_trippers.go:580]     Audit-Id: c92b1294-7721-45ca-9d5a-cd36c3fd1bb0
	I1206 18:19:51.960161  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:51.960167  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:51.960172  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:51.960177  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:51.960182  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:51 GMT
	I1206 18:19:51.960308  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:52.457919  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:52.457949  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:52.457960  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:52.457970  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:52.460205  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:52.460230  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:52.460240  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:52.460247  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:52 GMT
	I1206 18:19:52.460255  102554 round_trippers.go:580]     Audit-Id: 35d571ab-7639-40ee-9617-5613e853b180
	I1206 18:19:52.460263  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:52.460293  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:52.460303  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:52.460401  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:52.957888  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:52.957911  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:52.957919  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:52.957925  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:52.960197  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:52.960218  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:52.960227  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:52.960236  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:52.960245  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:52 GMT
	I1206 18:19:52.960254  102554 round_trippers.go:580]     Audit-Id: c05866e5-163d-47ed-b68e-53979df02816
	I1206 18:19:52.960262  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:52.960295  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:52.960453  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:53.457885  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:53.457910  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:53.457918  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:53.457924  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:53.460167  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:53.460188  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:53.460195  102554 round_trippers.go:580]     Audit-Id: c13e2da1-d4c6-4a24-a8e8-df072d740569
	I1206 18:19:53.460202  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:53.460208  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:53.460216  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:53.460225  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:53.460235  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:53 GMT
	I1206 18:19:53.460354  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:53.460770  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:53.957900  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:53.957925  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:53.957933  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:53.957939  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:53.960305  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:53.960327  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:53.960337  102554 round_trippers.go:580]     Audit-Id: 62c684a4-b39c-482b-823e-f99324674d2d
	I1206 18:19:53.960345  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:53.960353  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:53.960362  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:53.960371  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:53.960381  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:53 GMT
	I1206 18:19:53.960537  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:54.458135  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:54.458155  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:54.458164  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:54.458170  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:54.460516  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:54.460546  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:54.460559  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:54.460569  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:54.460580  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:54 GMT
	I1206 18:19:54.460597  102554 round_trippers.go:580]     Audit-Id: 294e58d2-956d-4d86-af49-014f11ba479f
	I1206 18:19:54.460603  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:54.460611  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:54.460716  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:54.958411  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:54.958438  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:54.958446  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:54.958452  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:54.960646  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:54.960668  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:54.960677  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:54 GMT
	I1206 18:19:54.960685  102554 round_trippers.go:580]     Audit-Id: e9d42b31-b6fe-4415-b749-7193b3f7929a
	I1206 18:19:54.960694  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:54.960703  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:54.960711  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:54.960728  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:54.960876  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:55.457606  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:55.457633  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:55.457645  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:55.457653  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:55.459929  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:55.459959  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:55.459969  102554 round_trippers.go:580]     Audit-Id: 76f20f9e-2c7d-43f2-af4a-c72b49bd2ade
	I1206 18:19:55.459976  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:55.459983  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:55.459990  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:55.459998  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:55.460023  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:55 GMT
	I1206 18:19:55.460137  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:55.957749  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:55.957772  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:55.957781  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:55.957794  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:55.959983  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:55.960010  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:55.960019  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:55 GMT
	I1206 18:19:55.960027  102554 round_trippers.go:580]     Audit-Id: 475b1356-4c64-4c1d-b494-f4076992efc4
	I1206 18:19:55.960033  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:55.960038  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:55.960044  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:55.960048  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:55.960171  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:55.960524  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:56.457765  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:56.457791  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:56.457800  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:56.457806  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:56.460083  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:56.460110  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:56.460121  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:56.460128  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:56 GMT
	I1206 18:19:56.460136  102554 round_trippers.go:580]     Audit-Id: d61294e6-a6ff-4a48-9234-ab1ce5af1abd
	I1206 18:19:56.460144  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:56.460153  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:56.460165  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:56.460328  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:56.957770  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:56.957798  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:56.957806  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:56.957812  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:56.960098  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:56.960118  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:56.960127  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:56.960133  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:56.960138  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:56 GMT
	I1206 18:19:56.960143  102554 round_trippers.go:580]     Audit-Id: b918bc4c-51b5-4bf3-9f78-82bec2c4db8b
	I1206 18:19:56.960148  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:56.960153  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:56.960294  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:57.457890  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:57.457917  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:57.457925  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:57.457931  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:57.460307  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:57.460333  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:57.460343  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:57.460351  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:57 GMT
	I1206 18:19:57.460359  102554 round_trippers.go:580]     Audit-Id: c97b31ac-6af0-4d66-a61f-4e5fc5c9d7b7
	I1206 18:19:57.460367  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:57.460376  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:57.460385  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:57.460513  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:57.958104  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:57.958133  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:57.958142  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:57.958149  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:57.960516  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:57.960540  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:57.960548  102554 round_trippers.go:580]     Audit-Id: aeb3ce32-8754-4cc9-a1b3-a2a9acb2a83c
	I1206 18:19:57.960556  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:57.960564  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:57.960572  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:57.960579  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:57.960586  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:57 GMT
	I1206 18:19:57.960725  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:57.961036  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:19:58.458399  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:58.458426  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:58.458434  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:58.458441  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:58.460765  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:58.460792  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:58.460801  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:58.460809  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:58.460816  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:58 GMT
	I1206 18:19:58.460822  102554 round_trippers.go:580]     Audit-Id: 581e8b82-06ab-4ba3-a029-a31391576a93
	I1206 18:19:58.460830  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:58.460836  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:58.460964  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:58.957490  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:58.957516  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:58.957524  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:58.957543  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:58.959752  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:58.959780  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:58.959791  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:58.959800  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:58.959809  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:58.959817  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:58 GMT
	I1206 18:19:58.959827  102554 round_trippers.go:580]     Audit-Id: a2022f60-22e9-44a6-a87a-d7604d6b3563
	I1206 18:19:58.959832  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:58.959982  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:59.458013  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:59.458036  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:59.458044  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:59.458050  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:59.460207  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:59.460233  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:59.460242  102554 round_trippers.go:580]     Audit-Id: 3ce3388b-5e59-4db4-8c17-f45f2eed7174
	I1206 18:19:59.460249  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:59.460256  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:59.460282  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:59.460293  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:59.460306  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:59 GMT
	I1206 18:19:59.460402  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:19:59.957559  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:19:59.957591  102554 round_trippers.go:469] Request Headers:
	I1206 18:19:59.957603  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:19:59.957609  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:19:59.960279  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:19:59.960310  102554 round_trippers.go:577] Response Headers:
	I1206 18:19:59.960319  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:19:59.960324  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:19:59.960330  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:19:59.960337  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:19:59 GMT
	I1206 18:19:59.960342  102554 round_trippers.go:580]     Audit-Id: 87b87ffc-77cc-4982-aa7a-770351c7f0b8
	I1206 18:19:59.960353  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:19:59.960508  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:00.457970  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:00.457995  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:00.458003  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:00.458009  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:00.460217  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:00.460238  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:00.460248  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:00.460257  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:00 GMT
	I1206 18:20:00.460273  102554 round_trippers.go:580]     Audit-Id: 544e9d32-52f5-44cd-aaf0-2ed749482a5e
	I1206 18:20:00.460288  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:00.460297  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:00.460306  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:00.460414  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:00.460741  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
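	(The loop above is minikube's node-readiness wait: a GET of /api/v1/nodes/multinode-193731 roughly every 500ms, re-checking the node's Ready condition until it flips to True, logged from node_ready.go:58. Below is a minimal client-go sketch of the same polling pattern, assuming a standard kubeconfig; waitForNodeReady and the 6-minute timeout are illustrative assumptions, not minikube's actual node_ready.go implementation.)

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitForNodeReady polls the Node object until its Ready condition is
	    // True or the context expires. The 500ms tick mirrors the request
	    // cadence visible in the log timestamps above.
	    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	        ticker := time.NewTicker(500 * time.Millisecond)
	        defer ticker.Stop()
	        for {
	            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        return nil // node reported Ready
	                    }
	                }
	                fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err() // overall wait timed out
	            case <-ticker.C:
	            }
	        }
	    }

	    func main() {
	        // Build a client from the default kubeconfig (~/.kube/config).
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Assumed overall bound; minikube's configured timeout may differ.
	        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	        defer cancel()
	        if err := waitForNodeReady(ctx, cs, "multinode-193731"); err != nil {
	            panic(err)
	        }
	    }

	(Bounding the loop with a context timeout is what eventually ends a wait like the one logged here if the node never reports Ready.)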
	I1206 18:20:00.957988  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:00.958010  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:00.958018  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:00.958025  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:00.960349  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:00.960380  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:00.960391  102554 round_trippers.go:580]     Audit-Id: 0d1c44d9-9c00-48cc-8e56-40bcfb29b05d
	I1206 18:20:00.960400  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:00.960408  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:00.960458  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:00.960486  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:00.960492  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:00 GMT
	I1206 18:20:00.960626  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:01.458240  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:01.458269  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:01.458277  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:01.458283  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:01.460650  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:01.460672  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:01.460679  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:01.460685  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:01.460690  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:01.460697  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:01.460704  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:01 GMT
	I1206 18:20:01.460718  102554 round_trippers.go:580]     Audit-Id: 81197dd8-96b3-4ae3-b25c-4fbbf34cec61
	I1206 18:20:01.460848  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:01.958412  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:01.958441  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:01.958463  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:01.958471  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:01.960710  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:01.960731  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:01.960738  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:01.960743  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:01.960749  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:01.960754  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:01 GMT
	I1206 18:20:01.960759  102554 round_trippers.go:580]     Audit-Id: e2fc5856-7719-442d-b7b9-d12142a614e1
	I1206 18:20:01.960764  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:01.960861  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:02.457473  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:02.457501  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:02.457512  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:02.457520  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:02.459781  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:02.459808  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:02.459817  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:02.459826  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:02.459834  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:02.459841  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:02 GMT
	I1206 18:20:02.459847  102554 round_trippers.go:580]     Audit-Id: e46d298a-e0ec-4f5f-a531-af5c46f5b541
	I1206 18:20:02.459854  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:02.459961  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:02.957515  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:02.957552  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:02.957563  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:02.957571  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:02.959864  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:02.959899  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:02.959910  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:02 GMT
	I1206 18:20:02.959915  102554 round_trippers.go:580]     Audit-Id: 6532a36f-3d60-429d-84ba-d69c4c95f474
	I1206 18:20:02.959921  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:02.959926  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:02.959936  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:02.959944  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:02.960069  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:02.960411  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:20:03.457635  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:03.457661  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:03.457672  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:03.457680  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:03.459988  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:03.460009  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:03.460016  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:03 GMT
	I1206 18:20:03.460021  102554 round_trippers.go:580]     Audit-Id: 5c00b0e8-f7f8-4e53-b348-9eabc9f60a69
	I1206 18:20:03.460026  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:03.460031  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:03.460037  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:03.460042  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:03.460139  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:03.957709  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:03.957739  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:03.957749  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:03.957755  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:03.959892  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:03.959917  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:03.959924  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:03.959930  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:03.959936  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:03.959945  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:03 GMT
	I1206 18:20:03.959952  102554 round_trippers.go:580]     Audit-Id: 61fc572b-34f9-4003-b6c8-b7afc7380f5a
	I1206 18:20:03.959961  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:03.960141  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:04.457849  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:04.457877  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:04.457885  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:04.457891  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:04.460209  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:04.460232  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:04.460240  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:04.460249  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:04.460256  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:04 GMT
	I1206 18:20:04.460292  102554 round_trippers.go:580]     Audit-Id: d3db0e33-b394-458f-a662-94345e2935e9
	I1206 18:20:04.460306  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:04.460324  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:04.460452  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:04.957952  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:04.957976  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:04.957984  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:04.957990  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:04.960205  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:04.960230  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:04.960238  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:04.960246  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:04.960254  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:04.960261  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:04 GMT
	I1206 18:20:04.960285  102554 round_trippers.go:580]     Audit-Id: 9f3805e4-3f78-43c3-8335-5a9eb50ff568
	I1206 18:20:04.960294  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:04.960416  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:04.960731  102554 node_ready.go:58] node "multinode-193731" has status "Ready":"False"
	I1206 18:20:05.458165  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:05.458188  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:05.458196  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:05.458202  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:05.460361  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:05.460381  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:05.460388  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:05.460394  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:05.460399  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:05 GMT
	I1206 18:20:05.460404  102554 round_trippers.go:580]     Audit-Id: 1b43268f-84b1-43a9-b486-3319910ea086
	I1206 18:20:05.460411  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:05.460419  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:05.460509  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:05.958145  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:05.958168  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:05.958177  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:05.958183  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:05.960413  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:05.960434  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:05.960441  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:05.960447  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:05 GMT
	I1206 18:20:05.960452  102554 round_trippers.go:580]     Audit-Id: ddcb239c-3261-4c0f-8dbd-da6f5ae520f5
	I1206 18:20:05.960458  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:05.960466  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:05.960474  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:05.960663  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:06.458354  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:06.458382  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:06.458390  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:06.458397  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:06.460693  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:06.460717  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:06.460725  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:06.460730  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:06.460736  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:06.460741  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:06.460749  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:06 GMT
	I1206 18:20:06.460754  102554 round_trippers.go:580]     Audit-Id: 3ecb65c3-80a9-4096-adcc-2d901002f5b4
	I1206 18:20:06.460858  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:06.957555  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:06.957579  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:06.957587  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:06.957593  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:06.959720  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:06.959748  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:06.959758  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:06 GMT
	I1206 18:20:06.959766  102554 round_trippers.go:580]     Audit-Id: 587e2b39-f807-41e9-87e0-f22c5991e56c
	I1206 18:20:06.959773  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:06.959782  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:06.959790  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:06.959807  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:06.959906  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"347","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1206 18:20:07.457509  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:07.457550  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.457559  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.457565  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.459759  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:07.459785  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.459792  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.459797  102554 round_trippers.go:580]     Audit-Id: a74222af-177b-445d-b7eb-49b166ec681a
	I1206 18:20:07.459802  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.459807  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.459813  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.459820  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.459909  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:07.460197  102554 node_ready.go:49] node "multinode-193731" has status "Ready":"True"
	I1206 18:20:07.460212  102554 node_ready.go:38] duration metric: took 32.009129698s waiting for node "multinode-193731" to be "Ready" ...
	I1206 18:20:07.460221  102554 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
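The pod_ready.go:35 line above names one label selector per system-critical component. The sketch below collects the matching kube-system pods; it is hypothetical (the log actually issues a single unfiltered pod list and filters client-side, whereas this version queries per selector, which is equivalent for illustration), and the package and function names are invented.

```go
package syspods

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Selectors copied verbatim from the pod_ready.go:35 log line above.
var criticalSelectors = []string{
	"k8s-app=kube-dns",
	"component=etcd",
	"component=kube-apiserver",
	"component=kube-controller-manager",
	"k8s-app=kube-proxy",
	"component=kube-scheduler",
}

// ListSystemCriticalPods returns every kube-system pod matching one of the
// critical selectors; each pod is then waited on individually, as the
// subsequent coredns/etcd/kube-apiserver waits in the log show.
func ListSystemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	var critical []corev1.Pod
	for _, sel := range criticalSelectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return nil, err
		}
		critical = append(critical, pods.Items...)
	}
	return critical, nil
}
```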
	I1206 18:20:07.460295  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:20:07.460307  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.460314  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.460320  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.463249  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:07.463269  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.463276  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.463282  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.463287  102554 round_trippers.go:580]     Audit-Id: 157723d3-c8ad-4b57-b589-b39e5def5884
	I1206 18:20:07.463292  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.463298  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.463303  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.463876  102554 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"426","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1206 18:20:07.466902  102554 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8t8qq" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:07.466972  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8t8qq
	I1206 18:20:07.466981  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.466988  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.466994  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.469043  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:07.469061  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.469075  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.469081  102554 round_trippers.go:580]     Audit-Id: 9f9758c0-6b50-4ce7-815e-7eb956744dfe
	I1206 18:20:07.469086  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.469091  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.469096  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.469101  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.469188  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"426","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1206 18:20:07.469559  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:07.469570  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.469577  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.469583  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.471414  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:07.471430  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.471436  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.471442  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.471450  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.471458  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.471468  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.471475  102554 round_trippers.go:580]     Audit-Id: 0024bf4c-06e4-4d54-a40b-846da4e340be
	I1206 18:20:07.471631  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:07.471941  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8t8qq
	I1206 18:20:07.471951  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.471959  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.471965  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.473770  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:07.473790  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.473799  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.473808  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.473817  102554 round_trippers.go:580]     Audit-Id: dd929e6b-9ac6-4ec9-9497-d2690e92dc43
	I1206 18:20:07.473824  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.473831  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.473838  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.473939  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"426","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1206 18:20:07.474339  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:07.474354  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.474360  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.474366  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.476010  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:07.476032  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.476041  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.476050  102554 round_trippers.go:580]     Audit-Id: 478d08e1-0aa0-4b6c-9504-bb89e5939719
	I1206 18:20:07.476058  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.476063  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.476068  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.476077  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.476182  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:07.977294  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8t8qq
	I1206 18:20:07.977322  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.977343  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.977352  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.979578  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:07.979600  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.979610  102554 round_trippers.go:580]     Audit-Id: 0ffefbd9-eee2-42d7-8d80-f9060f56439e
	I1206 18:20:07.979618  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.979626  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.979633  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.979674  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.979686  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.979839  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"435","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1206 18:20:07.980459  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:07.980478  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:07.980486  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:07.980491  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:07.982392  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:07.982415  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:07.982424  102554 round_trippers.go:580]     Audit-Id: c91be03f-8536-4bd2-83d0-d5594d76b856
	I1206 18:20:07.982431  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:07.982439  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:07.982447  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:07.982456  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:07.982469  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:07 GMT
	I1206 18:20:07.982610  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:08.477167  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8t8qq
	I1206 18:20:08.477194  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.477202  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.477209  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.479483  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:08.479506  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.479512  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.479518  102554 round_trippers.go:580]     Audit-Id: c41f834f-f1d8-4848-a9e6-ee711413359e
	I1206 18:20:08.479523  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.479528  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.479536  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.479542  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.479714  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"439","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1206 18:20:08.480326  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:08.480345  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.480353  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.480361  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.482354  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:08.482376  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.482386  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.482394  102554 round_trippers.go:580]     Audit-Id: 5a202ca0-8d82-48ed-b994-f1df94032f1d
	I1206 18:20:08.482402  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.482410  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.482419  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.482430  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.482516  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:08.482821  102554 pod_ready.go:92] pod "coredns-5dd5756b68-8t8qq" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:08.482837  102554 pod_ready.go:81] duration metric: took 1.015912002s waiting for pod "coredns-5dd5756b68-8t8qq" in "kube-system" namespace to be "Ready" ...
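Each per-pod wait above (coredns here, then etcd and kube-apiserver below) repeats the same shape: re-fetch one pod until its PodReady condition is True, then log the elapsed time. A hedged sketch of that loop follows, with invented names; it is not minikube's actual pod_ready.go.

```go
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodReady polls a single pod every 500ms (assumed interval) until its
// PodReady condition is True or the timeout elapses.
func WaitForPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					// Mirrors the log's `duration metric: took ... waiting for pod ...`.
					fmt.Printf("took %s waiting for pod %q to be Ready\n", time.Since(start), name)
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", namespace, name, timeout)
}
```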
	I1206 18:20:08.482846  102554 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:08.482898  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-193731
	I1206 18:20:08.482913  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.482920  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.482929  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.486204  102554 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 18:20:08.486229  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.486239  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.486248  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.486256  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.486263  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.486272  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.486283  102554 round_trippers.go:580]     Audit-Id: 1866ad63-660b-45a1-86b4-9d435e07f904
	I1206 18:20:08.486444  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-193731","namespace":"kube-system","uid":"57fe8b0e-15d1-4fb5-9c5e-d3831f895fcb","resourceVersion":"321","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"e89b11cad76127f5960df69b9190cfbe","kubernetes.io/config.mirror":"e89b11cad76127f5960df69b9190cfbe","kubernetes.io/config.seen":"2023-12-06T18:19:21.802918804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1206 18:20:08.486896  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:08.486915  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.486923  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.486931  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.488697  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:08.488719  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.488730  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.488738  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.488747  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.488753  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.488760  102554 round_trippers.go:580]     Audit-Id: d4277512-b70a-4036-ac9e-f6120f677f3e
	I1206 18:20:08.488766  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.488893  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:08.489226  102554 pod_ready.go:92] pod "etcd-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:08.489242  102554 pod_ready.go:81] duration metric: took 6.390556ms waiting for pod "etcd-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:08.489259  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:08.489314  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-193731
	I1206 18:20:08.489323  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.489329  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.489337  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.491057  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:08.491080  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.491090  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.491099  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.491107  102554 round_trippers.go:580]     Audit-Id: 69f4e1b1-15cc-43fa-b9e7-d072de1e20ff
	I1206 18:20:08.491114  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.491126  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.491138  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.491255  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-193731","namespace":"kube-system","uid":"0a8201e6-4f4c-40f5-855d-4e80f2c90ac3","resourceVersion":"289","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"2262c72455a75ed17e147e54641ca32e","kubernetes.io/config.mirror":"2262c72455a75ed17e147e54641ca32e","kubernetes.io/config.seen":"2023-12-06T18:19:21.802924979Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1206 18:20:08.491639  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:08.491650  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.491656  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.491663  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.493582  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:08.493600  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.493608  102554 round_trippers.go:580]     Audit-Id: afad6c39-1d26-4d4d-a00e-fd1824087713
	I1206 18:20:08.493614  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.493619  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.493624  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.493630  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.493638  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.493779  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:08.494087  102554 pod_ready.go:92] pod "kube-apiserver-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:08.494106  102554 pod_ready.go:81] duration metric: took 4.832556ms waiting for pod "kube-apiserver-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:08.494115  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:08.494157  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-193731
	I1206 18:20:08.494165  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.494171  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.494177  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.495801  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:08.495822  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.495832  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.495842  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.495853  102554 round_trippers.go:580]     Audit-Id: c9fae2eb-222c-4de0-862e-97e850fc9b5f
	I1206 18:20:08.495864  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.495879  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.495892  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.496008  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-193731","namespace":"kube-system","uid":"f7525d0f-d8fd-4494-bfaa-9887b29c993f","resourceVersion":"294","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"beed0bbae2db36b2912cd72c43112ba8","kubernetes.io/config.mirror":"beed0bbae2db36b2912cd72c43112ba8","kubernetes.io/config.seen":"2023-12-06T18:19:21.802926336Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1206 18:20:08.657809  102554 request.go:629] Waited for 161.302916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:08.657885  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:08.657896  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.657909  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.657922  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.660006  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:08.660025  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.660032  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.660037  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.660043  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.660048  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.660053  102554 round_trippers.go:580]     Audit-Id: fce75907-b363-4f8d-b8fb-1141c43fba13
	I1206 18:20:08.660058  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.660141  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:08.660503  102554 pod_ready.go:92] pod "kube-controller-manager-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:08.660523  102554 pod_ready.go:81] duration metric: took 166.402093ms waiting for pod "kube-controller-manager-multinode-193731" in "kube-system" namespace to be "Ready" ...
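The "Waited for … due to client-side throttling" lines above are produced by client-go's client-side rate limiter (a token bucket, by default QPS 5 with burst 10), not by server-side API priority and fairness. A minimal sketch of that limiter, assuming the client-go defaults:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // client-go defaults: 5 requests/second with a burst of 10.
        rl := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        start := time.Now()
        for i := 0; i < 15; i++ {
            rl.Accept() // blocks once the 10-token burst is spent
            fmt.Printf("request %2d at %s\n", i, time.Since(start).Round(time.Millisecond))
        }
    }

At 5 QPS the steady-state gap between requests is 200ms, which matches the ~160-200ms waits logged in this section.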
	I1206 18:20:08.660534  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbznd" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:08.857970  102554 request.go:629] Waited for 197.373752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbznd
	I1206 18:20:08.858047  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbznd
	I1206 18:20:08.858052  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:08.858060  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:08.858067  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:08.860338  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:08.860361  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:08.860370  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:08.860379  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:08.860388  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:08.860397  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:08.860405  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:08 GMT
	I1206 18:20:08.860417  102554 round_trippers.go:580]     Audit-Id: a097f4b1-b726-4cc3-99a8-9f335ad8b3cc
	I1206 18:20:08.860557  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbznd","generateName":"kube-proxy-","namespace":"kube-system","uid":"5400eb49-6ef8-4329-9b5a-799dceda044a","resourceVersion":"407","creationTimestamp":"2023-12-06T18:19:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f8c60d80-76c9-4a9c-b4f1-b3496072f0cb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8c60d80-76c9-4a9c-b4f1-b3496072f0cb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1206 18:20:09.058325  102554 request.go:629] Waited for 197.348238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:09.058390  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:09.058395  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:09.058403  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:09.058409  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:09.060665  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:09.060687  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:09.060694  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:09.060700  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:09 GMT
	I1206 18:20:09.060705  102554 round_trippers.go:580]     Audit-Id: b37c48ba-822b-416c-93e0-337e439ca06c
	I1206 18:20:09.060711  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:09.060718  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:09.060724  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:09.060870  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:09.061224  102554 pod_ready.go:92] pod "kube-proxy-tbznd" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:09.061241  102554 pod_ready.go:81] duration metric: took 400.698167ms waiting for pod "kube-proxy-tbznd" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:09.061254  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:09.258534  102554 request.go:629] Waited for 197.211963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-193731
	I1206 18:20:09.258608  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-193731
	I1206 18:20:09.258614  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:09.258625  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:09.258638  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:09.261138  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:09.261165  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:09.261182  102554 round_trippers.go:580]     Audit-Id: 39b5dc6f-8070-4c30-b719-856105e8fabf
	I1206 18:20:09.261191  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:09.261199  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:09.261208  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:09.261216  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:09.261225  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:09 GMT
	I1206 18:20:09.261344  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-193731","namespace":"kube-system","uid":"a64c0992-f8c6-4baf-b702-d3209993bff4","resourceVersion":"293","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1494cd8bca68c3af3dc9054b9947349f","kubernetes.io/config.mirror":"1494cd8bca68c3af3dc9054b9947349f","kubernetes.io/config.seen":"2023-12-06T18:19:21.802927564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1206 18:20:09.458141  102554 request.go:629] Waited for 196.351766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:09.458233  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:09.458244  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:09.458256  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:09.458270  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:09.460824  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:09.460852  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:09.460862  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:09.460871  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:09.460880  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:09.460889  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:09.460903  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:09 GMT
	I1206 18:20:09.460913  102554 round_trippers.go:580]     Audit-Id: 9d8baf5a-b10c-4a9c-8980-9fbe89078b81
	I1206 18:20:09.461038  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:09.461361  102554 pod_ready.go:92] pod "kube-scheduler-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:09.461385  102554 pod_ready.go:81] duration metric: took 400.124234ms waiting for pod "kube-scheduler-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:09.461394  102554 pod_ready.go:38] duration metric: took 2.001160773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
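Each "waiting up to 6m0s for pod …" block above polls the pod object and inspects its Ready condition. A compact sketch of the same check with client-go (a generic illustration, not minikube's actual pod_ready.go; the kubeconfig location is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet" and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-193731", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }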
	I1206 18:20:09.461420  102554 api_server.go:52] waiting for apiserver process to appear ...
	I1206 18:20:09.461467  102554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 18:20:09.471783  102554 command_runner.go:130] > 1440
	I1206 18:20:09.472562  102554 api_server.go:72] duration metric: took 34.085668316s to wait for apiserver process to appear ...
	I1206 18:20:09.472581  102554 api_server.go:88] waiting for apiserver healthz status ...
	I1206 18:20:09.472596  102554 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1206 18:20:09.476645  102554 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1206 18:20:09.476708  102554 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1206 18:20:09.476715  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:09.476726  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:09.476739  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:09.477859  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:09.477877  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:09.477889  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:09.477894  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:09.477901  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:09.477907  102554 round_trippers.go:580]     Content-Length: 264
	I1206 18:20:09.477915  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:09 GMT
	I1206 18:20:09.477923  102554 round_trippers.go:580]     Audit-Id: fd61c02e-0dac-4bf9-a456-0c6c3336687f
	I1206 18:20:09.477928  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:09.477952  102554 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1206 18:20:09.478046  102554 api_server.go:141] control plane version: v1.28.4
	I1206 18:20:09.478066  102554 api_server.go:131] duration metric: took 5.478783ms to wait for apiserver health ...
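The health and version probes above are two HTTPS GETs against the apiserver endpoint; a 200 with body "ok" from /healthz is treated as healthy, and /version supplies the control-plane version string. A bare-bones sketch (certificate verification is skipped here for brevity; the real client authenticates with the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Test-only transport: skip TLS verification instead of loading the cluster CA.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.58.2:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("GET %s -> %s\n%s\n", path, resp.Status, body)
        }
    }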
	I1206 18:20:09.478075  102554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 18:20:09.658512  102554 request.go:629] Waited for 180.358421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:20:09.658578  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:20:09.658586  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:09.658598  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:09.658621  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:09.661567  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:09.661591  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:09.661606  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:09.661621  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:09.661630  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:09.661640  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:09 GMT
	I1206 18:20:09.661655  102554 round_trippers.go:580]     Audit-Id: f818926f-2aee-43dd-8959-bfd7b353a406
	I1206 18:20:09.661667  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:09.662090  102554 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"439","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1206 18:20:09.664011  102554 system_pods.go:59] 8 kube-system pods found
	I1206 18:20:09.664040  102554 system_pods.go:61] "coredns-5dd5756b68-8t8qq" [b3765e1c-caa3-48e6-b18b-d1eec4d40452] Running
	I1206 18:20:09.664046  102554 system_pods.go:61] "etcd-multinode-193731" [57fe8b0e-15d1-4fb5-9c5e-d3831f895fcb] Running
	I1206 18:20:09.664052  102554 system_pods.go:61] "kindnet-8ldk5" [f5c0a719-e90e-4444-b144-e0b6f4d0db38] Running
	I1206 18:20:09.664059  102554 system_pods.go:61] "kube-apiserver-multinode-193731" [0a8201e6-4f4c-40f5-855d-4e80f2c90ac3] Running
	I1206 18:20:09.664073  102554 system_pods.go:61] "kube-controller-manager-multinode-193731" [f7525d0f-d8fd-4494-bfaa-9887b29c993f] Running
	I1206 18:20:09.664083  102554 system_pods.go:61] "kube-proxy-tbznd" [5400eb49-6ef8-4329-9b5a-799dceda044a] Running
	I1206 18:20:09.664091  102554 system_pods.go:61] "kube-scheduler-multinode-193731" [a64c0992-f8c6-4baf-b702-d3209993bff4] Running
	I1206 18:20:09.664100  102554 system_pods.go:61] "storage-provisioner" [635b29b3-0829-4e31-b46f-8ae9b78c6bb2] Running
	I1206 18:20:09.664108  102554 system_pods.go:74] duration metric: took 186.023193ms to wait for pod list to return data ...
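The eight-pod summary above comes from a single PodList GET over kube-system. The equivalent listing with client-go (same hedges as the readiness sketch):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }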
	I1206 18:20:09.664121  102554 default_sa.go:34] waiting for default service account to be created ...
	I1206 18:20:09.858556  102554 request.go:629] Waited for 194.35255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1206 18:20:09.858629  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1206 18:20:09.858636  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:09.858646  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:09.858659  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:09.861197  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:09.861221  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:09.861228  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:09.861233  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:09.861239  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:09.861245  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:09.861250  102554 round_trippers.go:580]     Content-Length: 261
	I1206 18:20:09.861255  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:09 GMT
	I1206 18:20:09.861260  102554 round_trippers.go:580]     Audit-Id: 31bdaae3-dd9c-4679-8f8b-0f1f59f10474
	I1206 18:20:09.861280  102554 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0744fe70-00fe-4a7a-9c5b-fc72fc5fc22a","resourceVersion":"331","creationTimestamp":"2023-12-06T18:19:35Z"}}]}
	I1206 18:20:09.861456  102554 default_sa.go:45] found service account: "default"
	I1206 18:20:09.861473  102554 default_sa.go:55] duration metric: took 197.343204ms for default service account to be created ...
	I1206 18:20:09.861481  102554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 18:20:10.057935  102554 request.go:629] Waited for 196.368779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:20:10.058001  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:20:10.058008  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:10.058018  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:10.058028  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:10.061237  102554 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1206 18:20:10.061261  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:10.061270  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:10.061279  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:10.061287  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:10 GMT
	I1206 18:20:10.061296  102554 round_trippers.go:580]     Audit-Id: c6b42a82-bac2-43b8-9cce-30efbe297324
	I1206 18:20:10.061312  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:10.061321  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:10.061683  102554 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"439","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1206 18:20:10.063324  102554 system_pods.go:86] 8 kube-system pods found
	I1206 18:20:10.063342  102554 system_pods.go:89] "coredns-5dd5756b68-8t8qq" [b3765e1c-caa3-48e6-b18b-d1eec4d40452] Running
	I1206 18:20:10.063347  102554 system_pods.go:89] "etcd-multinode-193731" [57fe8b0e-15d1-4fb5-9c5e-d3831f895fcb] Running
	I1206 18:20:10.063351  102554 system_pods.go:89] "kindnet-8ldk5" [f5c0a719-e90e-4444-b144-e0b6f4d0db38] Running
	I1206 18:20:10.063355  102554 system_pods.go:89] "kube-apiserver-multinode-193731" [0a8201e6-4f4c-40f5-855d-4e80f2c90ac3] Running
	I1206 18:20:10.063361  102554 system_pods.go:89] "kube-controller-manager-multinode-193731" [f7525d0f-d8fd-4494-bfaa-9887b29c993f] Running
	I1206 18:20:10.063366  102554 system_pods.go:89] "kube-proxy-tbznd" [5400eb49-6ef8-4329-9b5a-799dceda044a] Running
	I1206 18:20:10.063370  102554 system_pods.go:89] "kube-scheduler-multinode-193731" [a64c0992-f8c6-4baf-b702-d3209993bff4] Running
	I1206 18:20:10.063374  102554 system_pods.go:89] "storage-provisioner" [635b29b3-0829-4e31-b46f-8ae9b78c6bb2] Running
	I1206 18:20:10.063380  102554 system_pods.go:126] duration metric: took 201.895199ms to wait for k8s-apps to be running ...
	I1206 18:20:10.063387  102554 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 18:20:10.063432  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:20:10.073861  102554 system_svc.go:56] duration metric: took 10.453915ms WaitForService to wait for kubelet.
	I1206 18:20:10.073891  102554 kubeadm.go:581] duration metric: took 34.686999248s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 18:20:10.073920  102554 node_conditions.go:102] verifying NodePressure condition ...
	I1206 18:20:10.258319  102554 request.go:629] Waited for 184.326494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1206 18:20:10.258384  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1206 18:20:10.258389  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:10.258396  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:10.258403  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:10.260567  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:10.260591  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:10.260602  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:10.260611  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:10 GMT
	I1206 18:20:10.260620  102554 round_trippers.go:580]     Audit-Id: cd58dcc8-4151-4627-9c0c-785a605f0a96
	I1206 18:20:10.260630  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:10.260640  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:10.260645  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:10.260782  102554 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1206 18:20:10.261140  102554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 18:20:10.261162  102554 node_conditions.go:123] node cpu capacity is 8
	I1206 18:20:10.261175  102554 node_conditions.go:105] duration metric: took 187.249464ms to run NodePressure ...
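The two capacity figures above are read straight off the Node object's status; quantities print in Kubernetes units (hence 304681132Ki). A sketch of the same read, with the clientset built as in the earlier sketches:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-193731", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
        fmt.Printf("node cpu capacity is %s\n", cpu.String())
    }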
	I1206 18:20:10.261190  102554 start.go:228] waiting for startup goroutines ...
	I1206 18:20:10.261204  102554 start.go:233] waiting for cluster config update ...
	I1206 18:20:10.261220  102554 start.go:242] writing updated cluster config ...
	I1206 18:20:10.263510  102554 out.go:177] 
	I1206 18:20:10.265220  102554 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:20:10.265306  102554 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/config.json ...
	I1206 18:20:10.267074  102554 out.go:177] * Starting worker node multinode-193731-m02 in cluster multinode-193731
	I1206 18:20:10.268402  102554 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:20:10.269798  102554 out.go:177] * Pulling base image ...
	I1206 18:20:10.271663  102554 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:20:10.271685  102554 cache.go:56] Caching tarball of preloaded images
	I1206 18:20:10.271723  102554 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:20:10.271779  102554 preload.go:174] Found /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 18:20:10.271792  102554 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 18:20:10.271868  102554 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/config.json ...
	I1206 18:20:10.288521  102554 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon, skipping pull
	I1206 18:20:10.288549  102554 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in daemon, skipping load
	I1206 18:20:10.288581  102554 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:20:10.288619  102554 start.go:365] acquiring machines lock for multinode-193731-m02: {Name:mkfb971646eeb65adfefcec163861a35bc78a1aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:20:10.288735  102554 start.go:369] acquired machines lock for "multinode-193731-m02" in 96.146µs
	I1206 18:20:10.288764  102554 start.go:93] Provisioning new machine with config: &{Name:multinode-193731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 18:20:10.288857  102554 start.go:125] createHost starting for "m02" (driver="docker")
	I1206 18:20:10.291074  102554 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1206 18:20:10.291199  102554 start.go:159] libmachine.API.Create for "multinode-193731" (driver="docker")
	I1206 18:20:10.291223  102554 client.go:168] LocalClient.Create starting
	I1206 18:20:10.291295  102554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem
	I1206 18:20:10.291342  102554 main.go:141] libmachine: Decoding PEM data...
	I1206 18:20:10.291366  102554 main.go:141] libmachine: Parsing certificate...
	I1206 18:20:10.291429  102554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem
	I1206 18:20:10.291457  102554 main.go:141] libmachine: Decoding PEM data...
	I1206 18:20:10.291478  102554 main.go:141] libmachine: Parsing certificate...
	I1206 18:20:10.291694  102554 cli_runner.go:164] Run: docker network inspect multinode-193731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:20:10.307950  102554 network_create.go:77] Found existing network {name:multinode-193731 subnet:0xc002a3cde0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1206 18:20:10.307995  102554 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-193731-m02" container
	I1206 18:20:10.308059  102554 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 18:20:10.323001  102554 cli_runner.go:164] Run: docker volume create multinode-193731-m02 --label name.minikube.sigs.k8s.io=multinode-193731-m02 --label created_by.minikube.sigs.k8s.io=true
	I1206 18:20:10.339343  102554 oci.go:103] Successfully created a docker volume multinode-193731-m02
	I1206 18:20:10.339412  102554 cli_runner.go:164] Run: docker run --rm --name multinode-193731-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-193731-m02 --entrypoint /usr/bin/test -v multinode-193731-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib
	I1206 18:20:10.858562  102554 oci.go:107] Successfully prepared a docker volume multinode-193731-m02
	I1206 18:20:10.858620  102554 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:20:10.858642  102554 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 18:20:10.858715  102554 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-193731-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 18:20:16.008931  102554 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-193731-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -I lz4 -xf /preloaded.tar -C /extractDir: (5.15017865s)
	I1206 18:20:16.008968  102554 kic.go:203] duration metric: took 5.150323 seconds to extract preloaded images to volume
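The ~5.15s step above mounts the preload tarball and the new node's Docker volume into a throwaway container and untars it with lz4, so the node starts with its container images already in place. The same invocation shelled out from Go (the tarball path is a hypothetical stand-in; the image digest is elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4" // hypothetical path
        volume := "multinode-193731-m02"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711"
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }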
	W1206 18:20:16.009112  102554 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 18:20:16.009233  102554 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 18:20:16.068195  102554 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-193731-m02 --name multinode-193731-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-193731-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-193731-m02 --network multinode-193731 --ip 192.168.58.3 --volume multinode-193731-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
	I1206 18:20:16.366058  102554 cli_runner.go:164] Run: docker container inspect multinode-193731-m02 --format={{.State.Running}}
	I1206 18:20:16.382754  102554 cli_runner.go:164] Run: docker container inspect multinode-193731-m02 --format={{.State.Status}}
	I1206 18:20:16.399753  102554 cli_runner.go:164] Run: docker exec multinode-193731-m02 stat /var/lib/dpkg/alternatives/iptables
	I1206 18:20:16.458089  102554 oci.go:144] the created container "multinode-193731-m02" has a running status.
	I1206 18:20:16.458136  102554 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa...
	I1206 18:20:16.696487  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 18:20:16.696531  102554 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 18:20:16.720582  102554 cli_runner.go:164] Run: docker container inspect multinode-193731-m02 --format={{.State.Status}}
	I1206 18:20:16.737918  102554 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 18:20:16.737940  102554 kic_runner.go:114] Args: [docker exec --privileged multinode-193731-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 18:20:16.814949  102554 cli_runner.go:164] Run: docker container inspect multinode-193731-m02 --format={{.State.Status}}
	I1206 18:20:16.833633  102554 machine.go:88] provisioning docker machine ...
	I1206 18:20:16.833668  102554 ubuntu.go:169] provisioning hostname "multinode-193731-m02"
	I1206 18:20:16.833721  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:16.853452  102554 main.go:141] libmachine: Using SSH client type: native
	I1206 18:20:16.853770  102554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1206 18:20:16.853784  102554 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-193731-m02 && echo "multinode-193731-m02" | sudo tee /etc/hostname
	I1206 18:20:17.083387  102554 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-193731-m02
	
	I1206 18:20:17.083506  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:17.101870  102554 main.go:141] libmachine: Using SSH client type: native
	I1206 18:20:17.102321  102554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1206 18:20:17.102350  102554 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-193731-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-193731-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-193731-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:20:17.224390  102554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
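Both provisioning commands above (set the hostname, then patch /etc/hosts) run over the SSH port Docker published for the container, 127.0.0.1:32852 in this run, as user "docker" with the generated id_rsa key. A sketch of one such remote command with golang.org/x/crypto/ssh (key path hypothetical; host-key checking disabled, as is usual for throwaway test machines):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/multinode-193731-m02/id_rsa") // hypothetical path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32852", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-193731-m02 && echo "multinode-193731-m02" | sudo tee /etc/hostname`)
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }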
	I1206 18:20:17.224433  102554 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:20:17.224458  102554 ubuntu.go:177] setting up certificates
	I1206 18:20:17.224480  102554 provision.go:83] configureAuth start
	I1206 18:20:17.224554  102554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731-m02
	I1206 18:20:17.240525  102554 provision.go:138] copyHostCerts
	I1206 18:20:17.240565  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:20:17.240595  102554 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem, removing ...
	I1206 18:20:17.240604  102554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:20:17.240676  102554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:20:17.240748  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:20:17.240766  102554 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem, removing ...
	I1206 18:20:17.240773  102554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:20:17.240796  102554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:20:17.240842  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:20:17.240857  102554 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem, removing ...
	I1206 18:20:17.240863  102554 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:20:17.240881  102554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:20:17.240928  102554 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.multinode-193731-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-193731-m02]
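The server cert generated above carries the SANs listed in the log (192.168.58.3, 127.0.0.1, localhost, minikube, the machine name) and is signed with the profile's CA key. A stdlib sketch of producing such a certificate, self-signed here for brevity rather than CA-signed:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-193731-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config above
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-193731-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }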
	I1206 18:20:17.434211  102554 provision.go:172] copyRemoteCerts
	I1206 18:20:17.434273  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:20:17.434308  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:17.450889  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa Username:docker}
	I1206 18:20:17.544623  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 18:20:17.544694  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:20:17.566044  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 18:20:17.566106  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1206 18:20:17.587286  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 18:20:17.587358  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 18:20:17.608016  102554 provision.go:86] duration metric: configureAuth took 383.519363ms
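
The configureAuth sequence above copies the shared CA material and then mints a per-node server certificate whose SANs cover every address the node answers on. A minimal Go sketch of that signing step, assuming a CA cert/key are already loaded (a throwaway in-memory CA stands in here); key sizes, subject fields, and validity periods are illustrative assumptions, not minikube's exact code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; minikube would parse ca.pem/ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-193731-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-193731-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        // Error handling elided throughout; this only shows the shape of the call.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
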
	I1206 18:20:17.608043  102554 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:20:17.608206  102554 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:20:17.608350  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:17.623936  102554 main.go:141] libmachine: Using SSH client type: native
	I1206 18:20:17.624426  102554 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1206 18:20:17.624454  102554 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:20:17.830724  102554 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:20:17.830753  102554 machine.go:91] provisioned docker machine in 997.097578ms
	I1206 18:20:17.830765  102554 client.go:171] LocalClient.Create took 7.539534095s
	I1206 18:20:17.830785  102554 start.go:167] duration metric: libmachine.API.Create for "multinode-193731" took 7.539585842s
	I1206 18:20:17.830795  102554 start.go:300] post-start starting for "multinode-193731-m02" (driver="docker")
	I1206 18:20:17.830808  102554 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:20:17.830873  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:20:17.830922  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:17.848137  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa Username:docker}
	I1206 18:20:17.936898  102554 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:20:17.940033  102554 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1206 18:20:17.940058  102554 command_runner.go:130] > NAME="Ubuntu"
	I1206 18:20:17.940067  102554 command_runner.go:130] > VERSION_ID="22.04"
	I1206 18:20:17.940077  102554 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1206 18:20:17.940086  102554 command_runner.go:130] > VERSION_CODENAME=jammy
	I1206 18:20:17.940092  102554 command_runner.go:130] > ID=ubuntu
	I1206 18:20:17.940103  102554 command_runner.go:130] > ID_LIKE=debian
	I1206 18:20:17.940110  102554 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1206 18:20:17.940118  102554 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1206 18:20:17.940124  102554 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1206 18:20:17.940133  102554 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1206 18:20:17.940143  102554 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1206 18:20:17.940213  102554 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:20:17.940237  102554 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:20:17.940246  102554 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:20:17.940254  102554 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 18:20:17.940286  102554 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:20:17.940342  102554 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:20:17.940408  102554 filesync.go:149] local asset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> 163462.pem in /etc/ssl/certs
	I1206 18:20:17.940417  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> /etc/ssl/certs/163462.pem
	I1206 18:20:17.940507  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:20:17.947996  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:20:17.969585  102554 start.go:303] post-start completed in 138.776319ms
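
The post-start file sync above mirrors anything under .minikube/files onto the node at the same relative path (here files/etc/ssl/certs/163462.pem lands in /etc/ssl/certs). A hedged Go sketch of that scan, with the actual scp step elided and the root path taken from the log:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        root := "/home/jenkins/minikube-integration/17711-9529/.minikube/files"
        // Walk the local tree; each regular file maps to "/" + its relative path.
        filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, _ := filepath.Rel(root, p)
            fmt.Printf("local asset: %s -> /%s\n", p, rel)
            return nil
        })
    }
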
	I1206 18:20:17.969942  102554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731-m02
	I1206 18:20:17.986222  102554 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/config.json ...
	I1206 18:20:17.986543  102554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:20:17.986599  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:18.002338  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa Username:docker}
	I1206 18:20:18.088867  102554 command_runner.go:130] > 24%
	I1206 18:20:18.088948  102554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:20:18.092683  102554 command_runner.go:130] > 223G
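
The two df probes above just read one column from the second output row (used percent from df -h, available gigabytes from df -BG). A minimal Go equivalent of the first probe; the column index mirrors the awk '$5':

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("df", "-h", "/var").Output()
        if err != nil {
            panic(err)
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        fields := strings.Fields(lines[1]) // NR==2 in the awk above
        fmt.Println("used:", fields[4])    // $5 -> e.g. "24%"
    }
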
	I1206 18:20:18.092826  102554 start.go:128] duration metric: createHost completed in 7.803953635s
	I1206 18:20:18.092851  102554 start.go:83] releasing machines lock for "multinode-193731-m02", held for 7.804103551s
	I1206 18:20:18.092916  102554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731-m02
	I1206 18:20:18.111942  102554 out.go:177] * Found network options:
	I1206 18:20:18.113825  102554 out.go:177]   - NO_PROXY=192.168.58.2
	W1206 18:20:18.115444  102554 proxy.go:119] fail to check proxy env: Error ip not in block
	W1206 18:20:18.115497  102554 proxy.go:119] fail to check proxy env: Error ip not in block
	I1206 18:20:18.115581  102554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:20:18.115629  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:18.115628  102554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:20:18.115766  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:20:18.134451  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa Username:docker}
	I1206 18:20:18.134789  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa Username:docker}
	I1206 18:20:18.312985  102554 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 18:20:18.359080  102554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:20:18.363360  102554 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1206 18:20:18.363390  102554 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1206 18:20:18.363396  102554 command_runner.go:130] > Device: b0h/176d	Inode: 539827      Links: 1
	I1206 18:20:18.363402  102554 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 18:20:18.363408  102554 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1206 18:20:18.363413  102554 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1206 18:20:18.363418  102554 command_runner.go:130] > Change: 2023-12-06 18:00:33.726480179 +0000
	I1206 18:20:18.363423  102554 command_runner.go:130] >  Birth: 2023-12-06 18:00:33.726480179 +0000
	I1206 18:20:18.363473  102554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:20:18.380783  102554 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:20:18.380863  102554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:20:18.407046  102554 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1206 18:20:18.407113  102554 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
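
Disabling the conflicting CNI configs, as logged above, is just a rename: any bridge/podman config under /etc/cni/net.d gets a .mk_disabled suffix so CRI-O's loader skips it. A small Go sketch under those assumptions (must run as root; glob patterns copied from the find expression):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        var disabled []string
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err == nil {
                    disabled = append(disabled, m)
                }
            }
        }
        fmt.Println("disabled bridge cni config(s):", disabled)
    }
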
	I1206 18:20:18.407123  102554 start.go:475] detecting cgroup driver to use...
	I1206 18:20:18.407159  102554 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:20:18.407210  102554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:20:18.420385  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:20:18.430074  102554 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:20:18.430133  102554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:20:18.442071  102554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:20:18.455972  102554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:20:18.529996  102554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:20:18.609079  102554 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1206 18:20:18.609105  102554 docker.go:219] disabling docker service ...
	I1206 18:20:18.609157  102554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:20:18.626449  102554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:20:18.636590  102554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:20:18.646627  102554 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1206 18:20:18.711672  102554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:20:18.792570  102554 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1206 18:20:18.792638  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:20:18.802992  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:20:18.817741  102554 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 18:20:18.817791  102554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1206 18:20:18.817840  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:20:18.826597  102554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:20:18.826668  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:20:18.835293  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:20:18.843697  102554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
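
The three sed edits above pin the pause image, force the cgroupfs cgroup manager, and re-seat conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. A hedged Go sketch of the first two rewrites (the conmon_cgroup delete-and-append is omitted for brevity; the regexes mirror the logged sed patterns):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "..."|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Equivalent of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }
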
	I1206 18:20:18.853050  102554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:20:18.861302  102554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:20:18.868205  102554 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 18:20:18.868847  102554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 18:20:18.876607  102554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:20:18.948815  102554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 18:20:19.053763  102554 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:20:19.053827  102554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:20:19.057073  102554 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 18:20:19.057098  102554 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 18:20:19.057106  102554 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1206 18:20:19.057118  102554 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 18:20:19.057130  102554 command_runner.go:130] > Access: 2023-12-06 18:20:19.044261540 +0000
	I1206 18:20:19.057143  102554 command_runner.go:130] > Modify: 2023-12-06 18:20:19.044261540 +0000
	I1206 18:20:19.057154  102554 command_runner.go:130] > Change: 2023-12-06 18:20:19.044261540 +0000
	I1206 18:20:19.057165  102554 command_runner.go:130] >  Birth: -
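
"Will wait 60s for socket path" above boils down to polling stat on /var/run/crio/crio.sock until it shows up as a socket. A minimal sketch of that loop; the poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil // socket exists, as the stat output above confirms
            }
            time.Sleep(500 * time.Millisecond) // poll interval assumed
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio socket is ready")
    }
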
	I1206 18:20:19.057192  102554 start.go:543] Will wait 60s for crictl version
	I1206 18:20:19.057235  102554 ssh_runner.go:195] Run: which crictl
	I1206 18:20:19.059922  102554 command_runner.go:130] > /usr/bin/crictl
	I1206 18:20:19.059993  102554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:20:19.089921  102554 command_runner.go:130] > Version:  0.1.0
	I1206 18:20:19.089949  102554 command_runner.go:130] > RuntimeName:  cri-o
	I1206 18:20:19.089956  102554 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1206 18:20:19.089965  102554 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 18:20:19.091897  102554 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1206 18:20:19.091963  102554 ssh_runner.go:195] Run: crio --version
	I1206 18:20:19.126970  102554 command_runner.go:130] > crio version 1.24.6
	I1206 18:20:19.127000  102554 command_runner.go:130] > Version:          1.24.6
	I1206 18:20:19.127013  102554 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1206 18:20:19.127021  102554 command_runner.go:130] > GitTreeState:     clean
	I1206 18:20:19.127031  102554 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1206 18:20:19.127041  102554 command_runner.go:130] > GoVersion:        go1.18.2
	I1206 18:20:19.127046  102554 command_runner.go:130] > Compiler:         gc
	I1206 18:20:19.127051  102554 command_runner.go:130] > Platform:         linux/amd64
	I1206 18:20:19.127056  102554 command_runner.go:130] > Linkmode:         dynamic
	I1206 18:20:19.127064  102554 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 18:20:19.127071  102554 command_runner.go:130] > SeccompEnabled:   true
	I1206 18:20:19.127075  102554 command_runner.go:130] > AppArmorEnabled:  false
	I1206 18:20:19.127154  102554 ssh_runner.go:195] Run: crio --version
	I1206 18:20:19.160258  102554 command_runner.go:130] > crio version 1.24.6
	I1206 18:20:19.160298  102554 command_runner.go:130] > Version:          1.24.6
	I1206 18:20:19.160308  102554 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1206 18:20:19.160314  102554 command_runner.go:130] > GitTreeState:     clean
	I1206 18:20:19.160323  102554 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1206 18:20:19.160331  102554 command_runner.go:130] > GoVersion:        go1.18.2
	I1206 18:20:19.160337  102554 command_runner.go:130] > Compiler:         gc
	I1206 18:20:19.160345  102554 command_runner.go:130] > Platform:         linux/amd64
	I1206 18:20:19.160357  102554 command_runner.go:130] > Linkmode:         dynamic
	I1206 18:20:19.160370  102554 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1206 18:20:19.160381  102554 command_runner.go:130] > SeccompEnabled:   true
	I1206 18:20:19.160389  102554 command_runner.go:130] > AppArmorEnabled:  false
	I1206 18:20:19.163489  102554 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1206 18:20:19.164954  102554 out.go:177]   - env NO_PROXY=192.168.58.2
	I1206 18:20:19.166378  102554 cli_runner.go:164] Run: docker network inspect multinode-193731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:20:19.182169  102554 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1206 18:20:19.185755  102554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
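
The grep/cp pipeline above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, append the gateway mapping, and copy the result back. The same logic in Go, writing /etc/hosts directly (needs root; the temp-file-plus-sudo-cp indirection from the log is dropped):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // same filter as the grep -v above
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.58.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }
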
	I1206 18:20:19.195533  102554 certs.go:56] Setting up /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731 for IP: 192.168.58.3
	I1206 18:20:19.195568  102554 certs.go:190] acquiring lock for shared ca certs: {Name:mk88da27ec99c860f0c2ad3f4fab21b90cf40c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:20:19.195706  102554 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key
	I1206 18:20:19.195758  102554 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key
	I1206 18:20:19.195777  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 18:20:19.195799  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 18:20:19.195819  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 18:20:19.195838  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 18:20:19.195899  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem (1338 bytes)
	W1206 18:20:19.195942  102554 certs.go:433] ignoring /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346_empty.pem, impossibly tiny 0 bytes
	I1206 18:20:19.195959  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 18:20:19.195995  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem (1078 bytes)
	I1206 18:20:19.196031  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem (1123 bytes)
	I1206 18:20:19.196067  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem (1675 bytes)
	I1206 18:20:19.196133  102554 certs.go:437] found cert: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:20:19.196172  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:20:19.196193  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem -> /usr/share/ca-certificates/16346.pem
	I1206 18:20:19.196212  102554 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> /usr/share/ca-certificates/163462.pem
	I1206 18:20:19.196572  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 18:20:19.218165  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1206 18:20:19.239146  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 18:20:19.259697  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 18:20:19.280435  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 18:20:19.301227  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/16346.pem --> /usr/share/ca-certificates/16346.pem (1338 bytes)
	I1206 18:20:19.322000  102554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /usr/share/ca-certificates/163462.pem (1708 bytes)
	I1206 18:20:19.342898  102554 ssh_runner.go:195] Run: openssl version
	I1206 18:20:19.347508  102554 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1206 18:20:19.347703  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163462.pem && ln -fs /usr/share/ca-certificates/163462.pem /etc/ssl/certs/163462.pem"
	I1206 18:20:19.355923  102554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163462.pem
	I1206 18:20:19.359030  102554 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 18:06 /usr/share/ca-certificates/163462.pem
	I1206 18:20:19.359059  102554 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  6 18:06 /usr/share/ca-certificates/163462.pem
	I1206 18:20:19.359098  102554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163462.pem
	I1206 18:20:19.365124  102554 command_runner.go:130] > 3ec20f2e
	I1206 18:20:19.365186  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163462.pem /etc/ssl/certs/3ec20f2e.0"
	I1206 18:20:19.373585  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1206 18:20:19.381517  102554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:20:19.384495  102554 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:20:19.384576  102554 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  6 18:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:20:19.384631  102554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 18:20:19.390434  102554 command_runner.go:130] > b5213941
	I1206 18:20:19.390660  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1206 18:20:19.398980  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16346.pem && ln -fs /usr/share/ca-certificates/16346.pem /etc/ssl/certs/16346.pem"
	I1206 18:20:19.407176  102554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16346.pem
	I1206 18:20:19.410132  102554 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 18:06 /usr/share/ca-certificates/16346.pem
	I1206 18:20:19.410176  102554 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  6 18:06 /usr/share/ca-certificates/16346.pem
	I1206 18:20:19.410208  102554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16346.pem
	I1206 18:20:19.416004  102554 command_runner.go:130] > 51391683
	I1206 18:20:19.416199  102554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16346.pem /etc/ssl/certs/51391683.0"
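
Each of the three cert installs above follows the OpenSSL hashed-directory convention: compute the subject hash with openssl x509 -hash and symlink <hash>.0 in /etc/ssl/certs to the PEM so TLS clients can find the trust anchor. A Go sketch that shells out to openssl the same way; the example path is the minikubeCA one from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); err == nil {
            return nil // already linked, mirroring the test -L guard above
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
        fmt.Println("linked")
    }
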
	I1206 18:20:19.424308  102554 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1206 18:20:19.427116  102554 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:20:19.427154  102554 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1206 18:20:19.427241  102554 ssh_runner.go:195] Run: crio config
	I1206 18:20:19.461875  102554 command_runner.go:130] ! time="2023-12-06 18:20:19.461445242Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1206 18:20:19.461904  102554 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 18:20:19.467675  102554 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 18:20:19.467699  102554 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 18:20:19.467707  102554 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 18:20:19.467711  102554 command_runner.go:130] > #
	I1206 18:20:19.467718  102554 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 18:20:19.467727  102554 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 18:20:19.467733  102554 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 18:20:19.467740  102554 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 18:20:19.467743  102554 command_runner.go:130] > # reload'.
	I1206 18:20:19.467750  102554 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 18:20:19.467756  102554 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 18:20:19.467763  102554 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 18:20:19.467772  102554 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 18:20:19.467779  102554 command_runner.go:130] > [crio]
	I1206 18:20:19.467785  102554 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 18:20:19.467797  102554 command_runner.go:130] > # containers images, in this directory.
	I1206 18:20:19.467806  102554 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 18:20:19.467814  102554 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 18:20:19.467822  102554 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1206 18:20:19.467828  102554 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 18:20:19.467836  102554 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 18:20:19.467844  102554 command_runner.go:130] > # storage_driver = "vfs"
	I1206 18:20:19.467851  102554 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 18:20:19.467857  102554 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 18:20:19.467864  102554 command_runner.go:130] > # storage_option = [
	I1206 18:20:19.467867  102554 command_runner.go:130] > # ]
	I1206 18:20:19.467876  102554 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 18:20:19.467884  102554 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 18:20:19.467892  102554 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 18:20:19.467900  102554 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 18:20:19.467908  102554 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 18:20:19.467915  102554 command_runner.go:130] > # always happen on a node reboot
	I1206 18:20:19.467920  102554 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 18:20:19.467928  102554 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 18:20:19.467935  102554 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 18:20:19.467944  102554 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 18:20:19.467951  102554 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1206 18:20:19.467961  102554 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 18:20:19.467973  102554 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 18:20:19.467979  102554 command_runner.go:130] > # internal_wipe = true
	I1206 18:20:19.467985  102554 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 18:20:19.467993  102554 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 18:20:19.468001  102554 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 18:20:19.468009  102554 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 18:20:19.468018  102554 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 18:20:19.468024  102554 command_runner.go:130] > [crio.api]
	I1206 18:20:19.468030  102554 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 18:20:19.468037  102554 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 18:20:19.468042  102554 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 18:20:19.468049  102554 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 18:20:19.468056  102554 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 18:20:19.468063  102554 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 18:20:19.468070  102554 command_runner.go:130] > # stream_port = "0"
	I1206 18:20:19.468075  102554 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 18:20:19.468082  102554 command_runner.go:130] > # stream_enable_tls = false
	I1206 18:20:19.468088  102554 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 18:20:19.468095  102554 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 18:20:19.468101  102554 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 18:20:19.468109  102554 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1206 18:20:19.468114  102554 command_runner.go:130] > # minutes.
	I1206 18:20:19.468119  102554 command_runner.go:130] > # stream_tls_cert = ""
	I1206 18:20:19.468127  102554 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 18:20:19.468136  102554 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1206 18:20:19.468142  102554 command_runner.go:130] > # stream_tls_key = ""
	I1206 18:20:19.468148  102554 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 18:20:19.468156  102554 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 18:20:19.468164  102554 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1206 18:20:19.468168  102554 command_runner.go:130] > # stream_tls_ca = ""
	I1206 18:20:19.468176  102554 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 18:20:19.468183  102554 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 18:20:19.468190  102554 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1206 18:20:19.468197  102554 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 18:20:19.468212  102554 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 18:20:19.468223  102554 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 18:20:19.468228  102554 command_runner.go:130] > [crio.runtime]
	I1206 18:20:19.468233  102554 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 18:20:19.468241  102554 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 18:20:19.468247  102554 command_runner.go:130] > # "nofile=1024:2048"
	I1206 18:20:19.468253  102554 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 18:20:19.468260  102554 command_runner.go:130] > # default_ulimits = [
	I1206 18:20:19.468263  102554 command_runner.go:130] > # ]
	I1206 18:20:19.468292  102554 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 18:20:19.468299  102554 command_runner.go:130] > # no_pivot = false
	I1206 18:20:19.468307  102554 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 18:20:19.468315  102554 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 18:20:19.468323  102554 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 18:20:19.468334  102554 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 18:20:19.468346  102554 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 18:20:19.468362  102554 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 18:20:19.468372  102554 command_runner.go:130] > # conmon = ""
	I1206 18:20:19.468384  102554 command_runner.go:130] > # Cgroup setting for conmon
	I1206 18:20:19.468400  102554 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 18:20:19.468411  102554 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 18:20:19.468423  102554 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 18:20:19.468436  102554 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 18:20:19.468451  102554 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 18:20:19.468461  102554 command_runner.go:130] > # conmon_env = [
	I1206 18:20:19.468470  102554 command_runner.go:130] > # ]
	I1206 18:20:19.468480  102554 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 18:20:19.468493  102554 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 18:20:19.468506  102554 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 18:20:19.468516  102554 command_runner.go:130] > # default_env = [
	I1206 18:20:19.468526  102554 command_runner.go:130] > # ]
	I1206 18:20:19.468536  102554 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 18:20:19.468546  102554 command_runner.go:130] > # selinux = false
	I1206 18:20:19.468562  102554 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 18:20:19.468577  102554 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1206 18:20:19.468590  102554 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1206 18:20:19.468601  102554 command_runner.go:130] > # seccomp_profile = ""
	I1206 18:20:19.468612  102554 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1206 18:20:19.468626  102554 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1206 18:20:19.468641  102554 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1206 18:20:19.468653  102554 command_runner.go:130] > # which might increase security.
	I1206 18:20:19.468672  102554 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1206 18:20:19.468686  102554 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 18:20:19.468700  102554 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 18:20:19.468709  102554 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 18:20:19.468720  102554 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 18:20:19.468734  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:20:19.468743  102554 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 18:20:19.468754  102554 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 18:20:19.468761  102554 command_runner.go:130] > # the cgroup blockio controller.
	I1206 18:20:19.468767  102554 command_runner.go:130] > # blockio_config_file = ""
	I1206 18:20:19.468781  102554 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 18:20:19.468792  102554 command_runner.go:130] > # irqbalance daemon.
	I1206 18:20:19.468804  102554 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 18:20:19.468816  102554 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 18:20:19.468824  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:20:19.468829  102554 command_runner.go:130] > # rdt_config_file = ""
	I1206 18:20:19.468837  102554 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 18:20:19.468841  102554 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 18:20:19.468851  102554 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 18:20:19.468858  102554 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 18:20:19.468864  102554 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 18:20:19.468873  102554 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 18:20:19.468879  102554 command_runner.go:130] > # will be added.
	I1206 18:20:19.468883  102554 command_runner.go:130] > # default_capabilities = [
	I1206 18:20:19.468889  102554 command_runner.go:130] > # 	"CHOWN",
	I1206 18:20:19.468893  102554 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 18:20:19.468899  102554 command_runner.go:130] > # 	"FSETID",
	I1206 18:20:19.468903  102554 command_runner.go:130] > # 	"FOWNER",
	I1206 18:20:19.468909  102554 command_runner.go:130] > # 	"SETGID",
	I1206 18:20:19.468913  102554 command_runner.go:130] > # 	"SETUID",
	I1206 18:20:19.468919  102554 command_runner.go:130] > # 	"SETPCAP",
	I1206 18:20:19.468924  102554 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 18:20:19.468929  102554 command_runner.go:130] > # 	"KILL",
	I1206 18:20:19.468934  102554 command_runner.go:130] > # ]
	I1206 18:20:19.468944  102554 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 18:20:19.468953  102554 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 18:20:19.468960  102554 command_runner.go:130] > # add_inheritable_capabilities = true
	I1206 18:20:19.468966  102554 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 18:20:19.468974  102554 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 18:20:19.468980  102554 command_runner.go:130] > # default_sysctls = [
	I1206 18:20:19.468984  102554 command_runner.go:130] > # ]
	I1206 18:20:19.468989  102554 command_runner.go:130] > # List of devices on the host that a
	I1206 18:20:19.468997  102554 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 18:20:19.469002  102554 command_runner.go:130] > # allowed_devices = [
	I1206 18:20:19.469006  102554 command_runner.go:130] > # 	"/dev/fuse",
	I1206 18:20:19.469011  102554 command_runner.go:130] > # ]
	I1206 18:20:19.469016  102554 command_runner.go:130] > # List of additional devices, specified as
	I1206 18:20:19.469041  102554 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 18:20:19.469049  102554 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 18:20:19.469055  102554 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 18:20:19.469060  102554 command_runner.go:130] > # additional_devices = [
	I1206 18:20:19.469066  102554 command_runner.go:130] > # ]
	I1206 18:20:19.469071  102554 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 18:20:19.469075  102554 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 18:20:19.469081  102554 command_runner.go:130] > # 	"/etc/cdi",
	I1206 18:20:19.469085  102554 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 18:20:19.469091  102554 command_runner.go:130] > # ]
	I1206 18:20:19.469097  102554 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 18:20:19.469105  102554 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 18:20:19.469112  102554 command_runner.go:130] > # Defaults to false.
	I1206 18:20:19.469117  102554 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 18:20:19.469125  102554 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 18:20:19.469133  102554 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 18:20:19.469140  102554 command_runner.go:130] > # hooks_dir = [
	I1206 18:20:19.469145  102554 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 18:20:19.469150  102554 command_runner.go:130] > # ]
	I1206 18:20:19.469156  102554 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 18:20:19.469167  102554 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 18:20:19.469174  102554 command_runner.go:130] > # its default mounts from the following two files:
	I1206 18:20:19.469189  102554 command_runner.go:130] > #
	I1206 18:20:19.469197  102554 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 18:20:19.469206  102554 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 18:20:19.469213  102554 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 18:20:19.469220  102554 command_runner.go:130] > #
	I1206 18:20:19.469226  102554 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 18:20:19.469235  102554 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 18:20:19.469244  102554 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 18:20:19.469251  102554 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 18:20:19.469254  102554 command_runner.go:130] > #
	I1206 18:20:19.469261  102554 command_runner.go:130] > # default_mounts_file = ""
	I1206 18:20:19.469266  102554 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 18:20:19.469275  102554 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 18:20:19.469281  102554 command_runner.go:130] > # pids_limit = 0
	I1206 18:20:19.469287  102554 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 18:20:19.469296  102554 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 18:20:19.469304  102554 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 18:20:19.469314  102554 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 18:20:19.469320  102554 command_runner.go:130] > # log_size_max = -1
	I1206 18:20:19.469327  102554 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 18:20:19.469333  102554 command_runner.go:130] > # log_to_journald = false
	I1206 18:20:19.469340  102554 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 18:20:19.469347  102554 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 18:20:19.469352  102554 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 18:20:19.469357  102554 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 18:20:19.469364  102554 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 18:20:19.469371  102554 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 18:20:19.469377  102554 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 18:20:19.469383  102554 command_runner.go:130] > # read_only = false
	I1206 18:20:19.469389  102554 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 18:20:19.469398  102554 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 18:20:19.469404  102554 command_runner.go:130] > # live configuration reload.
	I1206 18:20:19.469408  102554 command_runner.go:130] > # log_level = "info"
	I1206 18:20:19.469416  102554 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 18:20:19.469421  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:20:19.469427  102554 command_runner.go:130] > # log_filter = ""
	I1206 18:20:19.469434  102554 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 18:20:19.469442  102554 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 18:20:19.469449  102554 command_runner.go:130] > # separated by comma.
	I1206 18:20:19.469453  102554 command_runner.go:130] > # uid_mappings = ""
	I1206 18:20:19.469461  102554 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 18:20:19.469469  102554 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 18:20:19.469473  102554 command_runner.go:130] > # separated by comma.
	I1206 18:20:19.469479  102554 command_runner.go:130] > # gid_mappings = ""
	I1206 18:20:19.469485  102554 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 18:20:19.469494  102554 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 18:20:19.469502  102554 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 18:20:19.469508  102554 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 18:20:19.469514  102554 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 18:20:19.469519  102554 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 18:20:19.469526  102554 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 18:20:19.469532  102554 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 18:20:19.469538  102554 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 18:20:19.469546  102554 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 18:20:19.469554  102554 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 18:20:19.469558  102554 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 18:20:19.469566  102554 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 18:20:19.469575  102554 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 18:20:19.469583  102554 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 18:20:19.469589  102554 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 18:20:19.469595  102554 command_runner.go:130] > # drop_infra_ctr = true
	I1206 18:20:19.469601  102554 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 18:20:19.469609  102554 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 18:20:19.469618  102554 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 18:20:19.469625  102554 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 18:20:19.469631  102554 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 18:20:19.469638  102554 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 18:20:19.469642  102554 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 18:20:19.469650  102554 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 18:20:19.469656  102554 command_runner.go:130] > # pinns_path = ""
	I1206 18:20:19.469667  102554 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 18:20:19.469675  102554 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1206 18:20:19.469684  102554 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1206 18:20:19.469691  102554 command_runner.go:130] > # default_runtime = "runc"
	I1206 18:20:19.469698  102554 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 18:20:19.469708  102554 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1206 18:20:19.469719  102554 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 18:20:19.469727  102554 command_runner.go:130] > # creation as a file is not desired either.
	I1206 18:20:19.469735  102554 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 18:20:19.469742  102554 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 18:20:19.469747  102554 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 18:20:19.469752  102554 command_runner.go:130] > # ]
	I1206 18:20:19.469760  102554 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 18:20:19.469769  102554 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 18:20:19.469777  102554 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1206 18:20:19.469786  102554 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1206 18:20:19.469792  102554 command_runner.go:130] > #
	I1206 18:20:19.469797  102554 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1206 18:20:19.469803  102554 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1206 18:20:19.469814  102554 command_runner.go:130] > #  runtime_type = "oci"
	I1206 18:20:19.469821  102554 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1206 18:20:19.469826  102554 command_runner.go:130] > #  privileged_without_host_devices = false
	I1206 18:20:19.469833  102554 command_runner.go:130] > #  allowed_annotations = []
	I1206 18:20:19.469837  102554 command_runner.go:130] > # Where:
	I1206 18:20:19.469844  102554 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1206 18:20:19.469853  102554 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1206 18:20:19.469861  102554 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 18:20:19.469869  102554 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 18:20:19.469875  102554 command_runner.go:130] > #   in $PATH.
	I1206 18:20:19.469882  102554 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1206 18:20:19.469889  102554 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 18:20:19.469895  102554 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1206 18:20:19.469901  102554 command_runner.go:130] > #   state.
	I1206 18:20:19.469907  102554 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 18:20:19.469915  102554 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 18:20:19.469922  102554 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 18:20:19.469930  102554 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 18:20:19.469936  102554 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 18:20:19.469946  102554 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 18:20:19.469953  102554 command_runner.go:130] > #   The currently recognized values are:
	I1206 18:20:19.469960  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 18:20:19.469969  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 18:20:19.469977  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 18:20:19.469983  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 18:20:19.469993  102554 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 18:20:19.470001  102554 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 18:20:19.470009  102554 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 18:20:19.470018  102554 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1206 18:20:19.470025  102554 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 18:20:19.470030  102554 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 18:20:19.470037  102554 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1206 18:20:19.470041  102554 command_runner.go:130] > runtime_type = "oci"
	I1206 18:20:19.470048  102554 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 18:20:19.470052  102554 command_runner.go:130] > runtime_config_path = ""
	I1206 18:20:19.470058  102554 command_runner.go:130] > monitor_path = ""
	I1206 18:20:19.470062  102554 command_runner.go:130] > monitor_cgroup = ""
	I1206 18:20:19.470066  102554 command_runner.go:130] > monitor_exec_cgroup = ""
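	The handler names defined in this table are what Kubernetes selects through a RuntimeClass; a minimal sketch pointing at the runc entry above (the RuntimeClass object and its name are assumed examples, not part of this test):

	  kubectl apply -f - <<'EOF'
	  apiVersion: node.k8s.io/v1
	  kind: RuntimeClass
	  metadata:
	    name: runc-example        # assumed example name
	  handler: runc               # matches [crio.runtime.runtimes.runc] above
	  EOF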
	I1206 18:20:19.470093  102554 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1206 18:20:19.470100  102554 command_runner.go:130] > # running containers
	I1206 18:20:19.470105  102554 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1206 18:20:19.470114  102554 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1206 18:20:19.470122  102554 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1206 18:20:19.470130  102554 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1206 18:20:19.470138  102554 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1206 18:20:19.470145  102554 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1206 18:20:19.470150  102554 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1206 18:20:19.470156  102554 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1206 18:20:19.470163  102554 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1206 18:20:19.470168  102554 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1206 18:20:19.470177  102554 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 18:20:19.470184  102554 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 18:20:19.470192  102554 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 18:20:19.470201  102554 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1206 18:20:19.470210  102554 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1206 18:20:19.470220  102554 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 18:20:19.470231  102554 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 18:20:19.470240  102554 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 18:20:19.470248  102554 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 18:20:19.470256  102554 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 18:20:19.470262  102554 command_runner.go:130] > # Example:
	I1206 18:20:19.470267  102554 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 18:20:19.470274  102554 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 18:20:19.470281  102554 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 18:20:19.470287  102554 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 18:20:19.470293  102554 command_runner.go:130] > # cpuset = "0-1"
	I1206 18:20:19.470297  102554 command_runner.go:130] > # cpushares = 0
	I1206 18:20:19.470303  102554 command_runner.go:130] > # Where:
	I1206 18:20:19.470309  102554 command_runner.go:130] > # The workload name is workload-type.
	I1206 18:20:19.470318  102554 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 18:20:19.470325  102554 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 18:20:19.470333  102554 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 18:20:19.470341  102554 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 18:20:19.470349  102554 command_runner.go:130] > # "io.crio.workload-type.cpushares/$container_name" = "value"
	I1206 18:20:19.470354  102554 command_runner.go:130] > # 
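	A minimal sketch of opting a pod into the example workload above, using the annotation names from the comments (the pod name, container name, and cpushares value are assumed placeholders):

	  kubectl apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: workload-demo                                      # assumed placeholder
	    annotations:
	      io.crio/workload: ""                                   # activation annotation; key only
	      io.crio.workload-type.cpushares/workload-demo: "512"   # per-container override (assumed value)
	  spec:
	    containers:
	    - name: workload-demo
	      image: registry.k8s.io/pause:3.9
	  EOF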
	I1206 18:20:19.470361  102554 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 18:20:19.470366  102554 command_runner.go:130] > #
	I1206 18:20:19.470372  102554 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 18:20:19.470380  102554 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1206 18:20:19.470389  102554 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1206 18:20:19.470395  102554 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1206 18:20:19.470402  102554 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1206 18:20:19.470409  102554 command_runner.go:130] > [crio.image]
	I1206 18:20:19.470415  102554 command_runner.go:130] > # Default transport for pulling images from remote container storage.
	I1206 18:20:19.470422  102554 command_runner.go:130] > # default_transport = "docker://"
	I1206 18:20:19.470428  102554 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 18:20:19.470436  102554 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 18:20:19.470442  102554 command_runner.go:130] > # global_auth_file = ""
	I1206 18:20:19.470448  102554 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 18:20:19.470455  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:20:19.470462  102554 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1206 18:20:19.470469  102554 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 18:20:19.470478  102554 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 18:20:19.470483  102554 command_runner.go:130] > # This option supports live configuration reload.
	I1206 18:20:19.470490  102554 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 18:20:19.470496  102554 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 18:20:19.470504  102554 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 18:20:19.470512  102554 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 18:20:19.470518  102554 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 18:20:19.470524  102554 command_runner.go:130] > # pause_command = "/pause"
	I1206 18:20:19.470530  102554 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 18:20:19.470539  102554 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 18:20:19.470547  102554 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 18:20:19.470554  102554 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 18:20:19.470563  102554 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 18:20:19.470567  102554 command_runner.go:130] > # signature_policy = ""
	I1206 18:20:19.470579  102554 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 18:20:19.470587  102554 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 18:20:19.470593  102554 command_runner.go:130] > # changing them here.
	I1206 18:20:19.470598  102554 command_runner.go:130] > # insecure_registries = [
	I1206 18:20:19.470603  102554 command_runner.go:130] > # ]
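	Following the recommendation above, a minimal sketch of marking a registry insecure in containers-registries.conf(5) instead of here (registry.example.com is a placeholder):

	  sudo tee -a /etc/containers/registries.conf <<'EOF'
	  [[registry]]
	  location = "registry.example.com"   # placeholder registry
	  insecure = true                     # skip TLS verification for this registry only
	  EOF
	  sudo systemctl restart crio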
	I1206 18:20:19.470610  102554 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 18:20:19.470617  102554 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 18:20:19.470625  102554 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 18:20:19.470630  102554 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 18:20:19.470636  102554 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 18:20:19.470643  102554 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 18:20:19.470649  102554 command_runner.go:130] > # CNI plugins.
	I1206 18:20:19.470653  102554 command_runner.go:130] > [crio.network]
	I1206 18:20:19.470659  102554 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 18:20:19.470670  102554 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 18:20:19.470676  102554 command_runner.go:130] > # cni_default_network = ""
	I1206 18:20:19.470682  102554 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 18:20:19.470689  102554 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 18:20:19.470695  102554 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 18:20:19.470701  102554 command_runner.go:130] > # plugin_dirs = [
	I1206 18:20:19.470705  102554 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 18:20:19.470711  102554 command_runner.go:130] > # ]
	I1206 18:20:19.470717  102554 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1206 18:20:19.470724  102554 command_runner.go:130] > [crio.metrics]
	I1206 18:20:19.470729  102554 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 18:20:19.470735  102554 command_runner.go:130] > # enable_metrics = false
	I1206 18:20:19.470740  102554 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 18:20:19.470747  102554 command_runner.go:130] > # By default, all metrics are enabled.
	I1206 18:20:19.470753  102554 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 18:20:19.470762  102554 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 18:20:19.470769  102554 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 18:20:19.470775  102554 command_runner.go:130] > # metrics_collectors = [
	I1206 18:20:19.470779  102554 command_runner.go:130] > # 	"operations",
	I1206 18:20:19.470786  102554 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1206 18:20:19.470791  102554 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1206 18:20:19.470797  102554 command_runner.go:130] > # 	"operations_errors",
	I1206 18:20:19.470802  102554 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1206 18:20:19.470808  102554 command_runner.go:130] > # 	"image_pulls_by_name",
	I1206 18:20:19.470813  102554 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1206 18:20:19.470819  102554 command_runner.go:130] > # 	"image_pulls_failures",
	I1206 18:20:19.470823  102554 command_runner.go:130] > # 	"image_pulls_successes",
	I1206 18:20:19.470827  102554 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 18:20:19.470834  102554 command_runner.go:130] > # 	"image_layer_reuse",
	I1206 18:20:19.470838  102554 command_runner.go:130] > # 	"containers_oom_total",
	I1206 18:20:19.470844  102554 command_runner.go:130] > # 	"containers_oom",
	I1206 18:20:19.470849  102554 command_runner.go:130] > # 	"processes_defunct",
	I1206 18:20:19.470855  102554 command_runner.go:130] > # 	"operations_total",
	I1206 18:20:19.470859  102554 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 18:20:19.470866  102554 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 18:20:19.470870  102554 command_runner.go:130] > # 	"operations_errors_total",
	I1206 18:20:19.470877  102554 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 18:20:19.470882  102554 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 18:20:19.470888  102554 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 18:20:19.470893  102554 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 18:20:19.470899  102554 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 18:20:19.470904  102554 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 18:20:19.470909  102554 command_runner.go:130] > # ]
	I1206 18:20:19.470915  102554 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 18:20:19.470921  102554 command_runner.go:130] > # metrics_port = 9090
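	When enable_metrics is turned on, the collectors listed above are served on this port; a minimal sketch of scraping them from the node (assumes the default port 9090 shown above):

	  curl -s http://127.0.0.1:9090/metrics | grep -E 'crio_operations|crio_image_pulls'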
	I1206 18:20:19.470926  102554 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 18:20:19.470932  102554 command_runner.go:130] > # metrics_socket = ""
	I1206 18:20:19.470938  102554 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 18:20:19.470945  102554 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 18:20:19.470954  102554 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 18:20:19.470960  102554 command_runner.go:130] > # certificate on any modification event.
	I1206 18:20:19.470965  102554 command_runner.go:130] > # metrics_cert = ""
	I1206 18:20:19.470972  102554 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 18:20:19.470978  102554 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 18:20:19.470983  102554 command_runner.go:130] > # metrics_key = ""
	I1206 18:20:19.470988  102554 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 18:20:19.470994  102554 command_runner.go:130] > [crio.tracing]
	I1206 18:20:19.471000  102554 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 18:20:19.471006  102554 command_runner.go:130] > # enable_tracing = false
	I1206 18:20:19.471012  102554 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 18:20:19.471018  102554 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1206 18:20:19.471023  102554 command_runner.go:130] > # Number of samples to collect per million spans.
	I1206 18:20:19.471031  102554 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 18:20:19.471037  102554 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 18:20:19.471042  102554 command_runner.go:130] > [crio.stats]
	I1206 18:20:19.471048  102554 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 18:20:19.471056  102554 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 18:20:19.471060  102554 command_runner.go:130] > # stats_collection_period = 0
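	The effective configuration this dump reflects can be regenerated on the node; a minimal sketch, assuming the crio binary's config subcommand is available, filtered for the one overridden image setting:

	  sudo crio config 2>/dev/null | grep pause_image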
	I1206 18:20:19.471122  102554 cni.go:84] Creating CNI manager for ""
	I1206 18:20:19.471131  102554 cni.go:136] 2 nodes found, recommending kindnet
	I1206 18:20:19.471139  102554 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:20:19.471157  102554 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-193731 NodeName:multinode-193731-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 18:20:19.471264  102554 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-193731-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 18:20:19.471314  102554 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-193731-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
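	Once the unit shown above is transferred (the scp steps just below), the rendered kubelet service can be inspected on the joining node; a minimal sketch using minikube's SSH wrapper (profile and node names taken from the log above):

	  minikube -p multinode-193731 ssh -n m02 -- sudo systemctl cat kubelet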
	I1206 18:20:19.471363  102554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1206 18:20:19.478894  102554 command_runner.go:130] > kubeadm
	I1206 18:20:19.478917  102554 command_runner.go:130] > kubectl
	I1206 18:20:19.478923  102554 command_runner.go:130] > kubelet
	I1206 18:20:19.479487  102554 binaries.go:44] Found k8s binaries, skipping transfer
	I1206 18:20:19.479547  102554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1206 18:20:19.487090  102554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1206 18:20:19.502500  102554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 18:20:19.518331  102554 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1206 18:20:19.521361  102554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:20:19.531075  102554 host.go:66] Checking if "multinode-193731" exists ...
	I1206 18:20:19.531354  102554 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:20:19.531345  102554 start.go:304] JoinCluster: &{Name:multinode-193731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-193731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:20:19.531446  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1206 18:20:19.531496  102554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:20:19.547400  102554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:20:19.683374  102554 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dvphxk.gwuox8yw30bcdxx0 --discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 
	I1206 18:20:19.688484  102554 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 18:20:19.688525  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvphxk.gwuox8yw30bcdxx0 --discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-193731-m02"
	I1206 18:20:19.721561  102554 command_runner.go:130] ! W1206 18:20:19.721075    1116 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1206 18:20:19.748441  102554 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1206 18:20:19.815143  102554 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 18:20:21.946419  102554 command_runner.go:130] > [preflight] Running pre-flight checks
	I1206 18:20:21.946450  102554 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1206 18:20:21.946458  102554 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1206 18:20:21.946462  102554 command_runner.go:130] > OS: Linux
	I1206 18:20:21.946467  102554 command_runner.go:130] > CGROUPS_CPU: enabled
	I1206 18:20:21.946473  102554 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1206 18:20:21.946477  102554 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1206 18:20:21.946482  102554 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1206 18:20:21.946489  102554 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1206 18:20:21.946497  102554 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1206 18:20:21.946508  102554 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1206 18:20:21.946522  102554 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1206 18:20:21.946531  102554 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1206 18:20:21.946547  102554 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1206 18:20:21.946557  102554 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1206 18:20:21.946566  102554 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 18:20:21.946575  102554 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 18:20:21.946582  102554 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1206 18:20:21.946593  102554 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1206 18:20:21.946605  102554 command_runner.go:130] > This node has joined the cluster:
	I1206 18:20:21.946616  102554 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1206 18:20:21.946630  102554 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1206 18:20:21.946644  102554 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1206 18:20:21.946672  102554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dvphxk.gwuox8yw30bcdxx0 --discovery-token-ca-cert-hash sha256:3c80b8d99c0fef62dd64a51f38dcb8ba9aab73688dcbf8005afca2b0a6fcf611 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-193731-m02": (2.25812871s)
	I1206 18:20:21.946714  102554 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1206 18:20:22.103331  102554 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1206 18:20:22.103421  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3 minikube.k8s.io/name=multinode-193731 minikube.k8s.io/updated_at=2023_12_06T18_20_22_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 18:20:22.174700  102554 command_runner.go:130] > node/multinode-193731-m02 labeled
	I1206 18:20:22.177674  102554 start.go:306] JoinCluster complete in 2.64631682s
	I1206 18:20:22.177702  102554 cni.go:84] Creating CNI manager for ""
	I1206 18:20:22.177709  102554 cni.go:136] 2 nodes found, recommending kindnet
	I1206 18:20:22.177773  102554 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 18:20:22.181439  102554 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1206 18:20:22.181471  102554 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1206 18:20:22.181482  102554 command_runner.go:130] > Device: 37h/55d	Inode: 547375      Links: 1
	I1206 18:20:22.181493  102554 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 18:20:22.181503  102554 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1206 18:20:22.181510  102554 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1206 18:20:22.181521  102554 command_runner.go:130] > Change: 2023-12-06 18:00:34.126507801 +0000
	I1206 18:20:22.181536  102554 command_runner.go:130] >  Birth: 2023-12-06 18:00:34.102506143 +0000
	I1206 18:20:22.181605  102554 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1206 18:20:22.181620  102554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1206 18:20:22.198673  102554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 18:20:22.437436  102554 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1206 18:20:22.437483  102554 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1206 18:20:22.437493  102554 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1206 18:20:22.437501  102554 command_runner.go:130] > daemonset.apps/kindnet configured
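	A quick check that the (re)applied kindnet daemonset is running on both nodes; a minimal sketch (the app=kindnet label is assumed from minikube's kindnet manifest, and the kubectl context name is assumed to match the profile):

	  kubectl --context multinode-193731 -n kube-system get pods -l app=kindnet -o wide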
	I1206 18:20:22.437841  102554 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:20:22.438040  102554 kapi.go:59] client config for multinode-193731: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:20:22.438306  102554 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1206 18:20:22.438319  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:22.438326  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:22.438332  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:22.440319  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:22.440339  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:22.440345  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:22 GMT
	I1206 18:20:22.440351  102554 round_trippers.go:580]     Audit-Id: f86ea6b4-d6f6-4601-ae22-6167fca906ff
	I1206 18:20:22.440356  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:22.440361  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:22.440367  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:22.440372  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:22.440380  102554 round_trippers.go:580]     Content-Length: 291
	I1206 18:20:22.440399  102554 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b5e38ad4-b7b3-450e-bec9-3b49e7e61e29","resourceVersion":"443","creationTimestamp":"2023-12-06T18:19:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1206 18:20:22.440483  102554 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-193731" context rescaled to 1 replicas
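	The rescale above is equivalent to scaling the deployment by hand; a minimal sketch (context name assumed to match the profile):

	  kubectl --context multinode-193731 -n kube-system scale deployment coredns --replicas=1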
	I1206 18:20:22.440511  102554 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1206 18:20:22.443921  102554 out.go:177] * Verifying Kubernetes components...
	I1206 18:20:22.445412  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:20:22.456512  102554 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:20:22.456821  102554 kapi.go:59] client config for multinode-193731: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.crt", KeyFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/profiles/multinode-193731/client.key", CAFile:"/home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 18:20:22.457103  102554 node_ready.go:35] waiting up to 6m0s for node "multinode-193731-m02" to be "Ready" ...
	I1206 18:20:22.457182  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:22.457191  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:22.457199  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:22.457208  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:22.459530  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:22.459553  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:22.459563  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:22.459571  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:22.459579  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:22.459588  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:22 GMT
	I1206 18:20:22.459600  102554 round_trippers.go:580]     Audit-Id: 3e350c56-6fef-4143-addb-03e00dd5c97b
	I1206 18:20:22.459611  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:22.459733  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"479","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1206 18:20:22.460072  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:22.460086  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:22.460097  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:22.460106  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:22.461994  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:22.462016  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:22.462024  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:22 GMT
	I1206 18:20:22.462032  102554 round_trippers.go:580]     Audit-Id: 9edb9f2f-700a-4fce-bb26-21d12e9e8ce1
	I1206 18:20:22.462040  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:22.462048  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:22.462060  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:22.462069  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:22.462185  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"479","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1206 18:20:22.963314  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:22.963344  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:22.963356  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:22.963365  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:22.966029  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:22.966059  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:22.966068  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:22.966074  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:22 GMT
	I1206 18:20:22.966080  102554 round_trippers.go:580]     Audit-Id: 30aac6a8-b99e-4046-aaa9-88e8c46e80fa
	I1206 18:20:22.966085  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:22.966091  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:22.966096  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:22.966247  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"479","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1206 18:20:23.462732  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:23.462756  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:23.462764  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:23.462770  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:23.465205  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:23.465229  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:23.465235  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:23.465241  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:23.465246  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:23.465254  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:23 GMT
	I1206 18:20:23.465261  102554 round_trippers.go:580]     Audit-Id: fc7dfc5f-f376-47fb-a672-1329870b7c3a
	I1206 18:20:23.465269  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:23.465375  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"479","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1206 18:20:23.962772  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:23.962798  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:23.962806  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:23.962812  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:23.965170  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:23.965192  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:23.965199  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:23 GMT
	I1206 18:20:23.965204  102554 round_trippers.go:580]     Audit-Id: 39d73d48-be9e-4b51-a5d1-9eb151c5b9d6
	I1206 18:20:23.965212  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:23.965220  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:23.965230  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:23.965242  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:23.965370  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"479","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1206 18:20:24.463165  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:24.463188  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.463197  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.463202  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.465597  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:24.465625  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.465632  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.465637  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.465645  102554 round_trippers.go:580]     Audit-Id: a6934799-fcb8-441e-82d1-37fd5867cf1e
	I1206 18:20:24.465654  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.465661  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.465669  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.465848  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"499","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5728 chars]
	I1206 18:20:24.466242  102554 node_ready.go:49] node "multinode-193731-m02" has status "Ready":"True"
	I1206 18:20:24.466261  102554 node_ready.go:38] duration metric: took 2.009139507s waiting for node "multinode-193731-m02" to be "Ready" ...
	I1206 18:20:24.466276  102554 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
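	The readiness polling that follows can be reproduced with kubectl's built-in wait; a minimal sketch mirroring the same node and system-pod checks within the 6m0s budget above (context name assumed to match the profile):

	  kubectl --context multinode-193731 wait --for=condition=Ready node/multinode-193731-m02 --timeout=6m
	  kubectl --context multinode-193731 -n kube-system wait --for=condition=Ready pod --all --timeout=6m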
	I1206 18:20:24.466355  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1206 18:20:24.466367  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.466377  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.466387  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.469340  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:24.469362  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.469369  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.469375  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.469380  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.469386  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.469395  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.469400  102554 round_trippers.go:580]     Audit-Id: 1afbfdb1-70e6-4452-86ed-cf0fc57864f0
	I1206 18:20:24.469944  102554 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"503"},"items":[{"metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"439","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1206 18:20:24.471941  102554 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8t8qq" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.472008  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-8t8qq
	I1206 18:20:24.472016  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.472023  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.472029  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.473974  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.473997  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.474006  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.474014  102554 round_trippers.go:580]     Audit-Id: e55bfef9-f453-44d8-8048-8f6437023d32
	I1206 18:20:24.474023  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.474033  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.474039  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.474046  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.474151  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-8t8qq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b3765e1c-caa3-48e6-b18b-d1eec4d40452","resourceVersion":"439","creationTimestamp":"2023-12-06T18:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"74219fc5-7250-4f8b-b71a-86d0e2618ff1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"74219fc5-7250-4f8b-b71a-86d0e2618ff1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1206 18:20:24.474549  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:24.474560  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.474566  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.474572  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.476281  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.476296  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.476303  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.476308  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.476313  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.476318  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.476323  102554 round_trippers.go:580]     Audit-Id: cfe74cd3-58db-4941-a968-36610706345d
	I1206 18:20:24.476328  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.476499  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:24.476795  102554 pod_ready.go:92] pod "coredns-5dd5756b68-8t8qq" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:24.476812  102554 pod_ready.go:81] duration metric: took 4.852343ms waiting for pod "coredns-5dd5756b68-8t8qq" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.476820  102554 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.476864  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-193731
	I1206 18:20:24.476872  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.476879  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.476885  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.478479  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.478494  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.478500  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.478506  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.478514  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.478522  102554 round_trippers.go:580]     Audit-Id: 32e383da-f344-49a5-9953-5f7b155c0fe3
	I1206 18:20:24.478532  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.478543  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.478630  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-193731","namespace":"kube-system","uid":"57fe8b0e-15d1-4fb5-9c5e-d3831f895fcb","resourceVersion":"321","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"e89b11cad76127f5960df69b9190cfbe","kubernetes.io/config.mirror":"e89b11cad76127f5960df69b9190cfbe","kubernetes.io/config.seen":"2023-12-06T18:19:21.802918804Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1206 18:20:24.478963  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:24.478978  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.478985  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.478991  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.480587  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.480603  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.480610  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.480615  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.480626  102554 round_trippers.go:580]     Audit-Id: ad7a9036-2ca1-4743-a3b6-4c8134c832d5
	I1206 18:20:24.480632  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.480639  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.480647  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.480809  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:24.481079  102554 pod_ready.go:92] pod "etcd-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:24.481093  102554 pod_ready.go:81] duration metric: took 4.268524ms waiting for pod "etcd-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.481106  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.481151  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-193731
	I1206 18:20:24.481158  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.481164  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.481171  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.482732  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.482752  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.482761  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.482768  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.482776  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.482783  102554 round_trippers.go:580]     Audit-Id: 7e76bfc3-0a8a-4b62-802a-26730d498444
	I1206 18:20:24.482792  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.482802  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.482888  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-193731","namespace":"kube-system","uid":"0a8201e6-4f4c-40f5-855d-4e80f2c90ac3","resourceVersion":"289","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"2262c72455a75ed17e147e54641ca32e","kubernetes.io/config.mirror":"2262c72455a75ed17e147e54641ca32e","kubernetes.io/config.seen":"2023-12-06T18:19:21.802924979Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1206 18:20:24.483248  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:24.483262  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.483272  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.483280  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.484694  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.484708  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.484717  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.484725  102554 round_trippers.go:580]     Audit-Id: b110ef49-d4cd-4cfc-9eab-f9305bb8574a
	I1206 18:20:24.484733  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.484741  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.484753  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.484763  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.484896  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:24.485163  102554 pod_ready.go:92] pod "kube-apiserver-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:24.485176  102554 pod_ready.go:81] duration metric: took 4.060006ms waiting for pod "kube-apiserver-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.485184  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.485221  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-193731
	I1206 18:20:24.485229  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.485235  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.485241  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.486896  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.486915  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.486925  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.486933  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.486940  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.486956  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.486964  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.486975  102554 round_trippers.go:580]     Audit-Id: 71c4c40d-443c-4f4c-aa63-99603571ee9e
	I1206 18:20:24.487101  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-193731","namespace":"kube-system","uid":"f7525d0f-d8fd-4494-bfaa-9887b29c993f","resourceVersion":"294","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"beed0bbae2db36b2912cd72c43112ba8","kubernetes.io/config.mirror":"beed0bbae2db36b2912cd72c43112ba8","kubernetes.io/config.seen":"2023-12-06T18:19:21.802926336Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1206 18:20:24.487442  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:24.487454  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.487462  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.487470  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.488947  102554 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1206 18:20:24.488962  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.488968  102554 round_trippers.go:580]     Audit-Id: 5177578b-85c4-40fb-861c-4daf47b9475e
	I1206 18:20:24.488973  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.488978  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.488984  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.488989  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.488994  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.489095  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:24.489348  102554 pod_ready.go:92] pod "kube-controller-manager-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:24.489360  102554 pod_ready.go:81] duration metric: took 4.17063ms waiting for pod "kube-controller-manager-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.489368  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cr5kr" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.663742  102554 request.go:629] Waited for 174.320939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr5kr
	I1206 18:20:24.663815  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr5kr
	I1206 18:20:24.663824  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.663831  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.663840  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.666216  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:24.666241  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.666248  102554 round_trippers.go:580]     Audit-Id: deea2d18-f1c3-4f3d-a2e2-1d10c22c3eb8
	I1206 18:20:24.666255  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.666263  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.666271  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.666280  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.666288  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.666417  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cr5kr","generateName":"kube-proxy-","namespace":"kube-system","uid":"59b1e353-df7a-4a57-bada-e3d619ecf8eb","resourceVersion":"502","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f8c60d80-76c9-4a9c-b4f1-b3496072f0cb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8c60d80-76c9-4a9c-b4f1-b3496072f0cb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1206 18:20:24.863161  102554 request.go:629] Waited for 196.280183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:24.863253  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731-m02
	I1206 18:20:24.863258  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:24.863266  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:24.863272  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:24.865510  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:24.865529  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:24.865536  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:24.865542  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:24.865547  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:24.865552  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:24.865557  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:24 GMT
	I1206 18:20:24.865562  102554 round_trippers.go:580]     Audit-Id: 62fabc85-a39d-4340-a1c5-ad3a4967e2d6
	I1206 18:20:24.865670  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731-m02","uid":"6fa77089-6bcc-4ac9-8eb1-747d5cd83b33","resourceVersion":"499","creationTimestamp":"2023-12-06T18:20:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_06T18_20_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:20:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5728 chars]
	I1206 18:20:24.865976  102554 pod_ready.go:92] pod "kube-proxy-cr5kr" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:24.865991  102554 pod_ready.go:81] duration metric: took 376.618901ms waiting for pod "kube-proxy-cr5kr" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:24.866001  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbznd" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:25.063268  102554 request.go:629] Waited for 197.208898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbznd
	I1206 18:20:25.063332  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbznd
	I1206 18:20:25.063339  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:25.063347  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:25.063361  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:25.065557  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:25.065580  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:25.065590  102554 round_trippers.go:580]     Audit-Id: 0cf37243-a34b-496d-b2d9-67af0a53b21d
	I1206 18:20:25.065599  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:25.065608  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:25.065626  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:25.065637  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:25.065645  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:25 GMT
	I1206 18:20:25.065749  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbznd","generateName":"kube-proxy-","namespace":"kube-system","uid":"5400eb49-6ef8-4329-9b5a-799dceda044a","resourceVersion":"407","creationTimestamp":"2023-12-06T18:19:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f8c60d80-76c9-4a9c-b4f1-b3496072f0cb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f8c60d80-76c9-4a9c-b4f1-b3496072f0cb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1206 18:20:25.263575  102554 request.go:629] Waited for 197.350624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:25.263645  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:25.263661  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:25.263669  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:25.263676  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:25.266210  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:25.266236  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:25.266247  102554 round_trippers.go:580]     Audit-Id: 3a49d229-1b03-4a5f-acdf-94694d90b248
	I1206 18:20:25.266256  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:25.266265  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:25.266271  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:25.266279  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:25.266295  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:25 GMT
	I1206 18:20:25.266421  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:25.266806  102554 pod_ready.go:92] pod "kube-proxy-tbznd" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:25.266825  102554 pod_ready.go:81] duration metric: took 400.81878ms waiting for pod "kube-proxy-tbznd" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:25.266836  102554 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:25.463965  102554 request.go:629] Waited for 197.060054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-193731
	I1206 18:20:25.464051  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-193731
	I1206 18:20:25.464067  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:25.464076  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:25.464082  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:25.466483  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:25.466507  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:25.466515  102554 round_trippers.go:580]     Audit-Id: b12db52b-09c6-4823-93f3-95ed9a2dcd40
	I1206 18:20:25.466521  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:25.466527  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:25.466532  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:25.466541  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:25.466547  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:25 GMT
	I1206 18:20:25.466659  102554 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-193731","namespace":"kube-system","uid":"a64c0992-f8c6-4baf-b702-d3209993bff4","resourceVersion":"293","creationTimestamp":"2023-12-06T18:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1494cd8bca68c3af3dc9054b9947349f","kubernetes.io/config.mirror":"1494cd8bca68c3af3dc9054b9947349f","kubernetes.io/config.seen":"2023-12-06T18:19:21.802927564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-06T18:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1206 18:20:25.663319  102554 request.go:629] Waited for 196.29421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:25.663415  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-193731
	I1206 18:20:25.663422  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:25.663434  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:25.663445  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:25.665853  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:25.665881  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:25.665890  102554 round_trippers.go:580]     Audit-Id: e83defb3-a318-4197-89f6-dfeaa3faa8a1
	I1206 18:20:25.665896  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:25.665902  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:25.665906  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:25.665912  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:25.665917  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:25 GMT
	I1206 18:20:25.666000  102554 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-06T18:19:19Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1206 18:20:25.666323  102554 pod_ready.go:92] pod "kube-scheduler-multinode-193731" in "kube-system" namespace has status "Ready":"True"
	I1206 18:20:25.666339  102554 pod_ready.go:81] duration metric: took 399.495993ms waiting for pod "kube-scheduler-multinode-193731" in "kube-system" namespace to be "Ready" ...
	I1206 18:20:25.666349  102554 pod_ready.go:38] duration metric: took 1.200056382s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1206 18:20:25.666367  102554 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 18:20:25.666409  102554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:20:25.677226  102554 system_svc.go:56] duration metric: took 10.852041ms WaitForService to wait for kubelet.
	I1206 18:20:25.677252  102554 kubeadm.go:581] duration metric: took 3.236719704s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1206 18:20:25.677287  102554 node_conditions.go:102] verifying NodePressure condition ...
	I1206 18:20:25.863731  102554 request.go:629] Waited for 186.362722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1206 18:20:25.863807  102554 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1206 18:20:25.863819  102554 round_trippers.go:469] Request Headers:
	I1206 18:20:25.863831  102554 round_trippers.go:473]     Accept: application/json, */*
	I1206 18:20:25.863842  102554 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1206 18:20:25.866262  102554 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1206 18:20:25.866284  102554 round_trippers.go:577] Response Headers:
	I1206 18:20:25.866291  102554 round_trippers.go:580]     Audit-Id: 76e62671-da0a-488f-93ca-a5af7bd8c249
	I1206 18:20:25.866296  102554 round_trippers.go:580]     Cache-Control: no-cache, private
	I1206 18:20:25.866302  102554 round_trippers.go:580]     Content-Type: application/json
	I1206 18:20:25.866307  102554 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 821d4ea4-a1e3-457d-96ba-cfec55c7a837
	I1206 18:20:25.866312  102554 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7cb31dcb-b18d-4293-8ae3-4b89c0ac3493
	I1206 18:20:25.866322  102554 round_trippers.go:580]     Date: Wed, 06 Dec 2023 18:20:25 GMT
	I1206 18:20:25.866544  102554 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"multinode-193731","uid":"483f1079-da7a-4d14-8cea-95a52ef69765","resourceVersion":"420","creationTimestamp":"2023-12-06T18:19:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-193731","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1ed075f134c3dd34466bae93fc5b34a7b7c859c3","minikube.k8s.io/name":"multinode-193731","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_06T18_19_22_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12720 chars]
	I1206 18:20:25.867222  102554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 18:20:25.867242  102554 node_conditions.go:123] node cpu capacity is 8
	I1206 18:20:25.867253  102554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 18:20:25.867258  102554 node_conditions.go:123] node cpu capacity is 8
	I1206 18:20:25.867263  102554 node_conditions.go:105] duration metric: took 189.969527ms to run NodePressure ...
	I1206 18:20:25.867275  102554 start.go:228] waiting for startup goroutines ...
	I1206 18:20:25.867306  102554 start.go:242] writing updated cluster config ...
	I1206 18:20:25.867644  102554 ssh_runner.go:195] Run: rm -f paused
	I1206 18:20:25.914738  102554 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1206 18:20:25.918036  102554 out.go:177] * Done! kubectl is now configured to use "multinode-193731" cluster and "default" namespace by default
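
The pod_ready.go and node_ready.go entries above trace minikube polling the API server until each system-critical pod reports the Ready condition, while the "Waited for ... due to client-side throttling" entries are client-go's default rate limiter (QPS 5, burst 10 when left unset) pacing that burst of GETs; as the message itself notes, the delay is client-side, not API priority and fairness. Below is a minimal client-go sketch of both mechanisms; waitPodReady and newFastClient are hypothetical names for illustration, not minikube's actual helpers, and kubeconfigPath is a placeholder.

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition,
// mirroring the pod_ready.go waits traced in the log above.
// Sketch only: transient errors simply retry until the timeout.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API error: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

// newFastClient raises the default client-side rate limit that produced
// the throttling waits above (defaults: QPS 5, burst 10).
func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
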
	
	* 
	* ==> CRI-O <==
	* Dec 06 18:20:07 multinode-193731 crio[957]: time="2023-12-06 18:20:07.616320618Z" level=info msg="Created container 13cd81fb527411144596acc24d147557717cc511d836d552598e7485a8d1430a: kube-system/coredns-5dd5756b68-8t8qq/coredns" id=6bd37dca-10e3-4c39-914f-e1e71b3fa0d0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 18:20:07 multinode-193731 crio[957]: time="2023-12-06 18:20:07.616353674Z" level=info msg="Starting container: e1898dd860917ad7a3580f46b87653ec5f7468f105f8a0650fae11d388aae46a" id=064b3d95-0064-4a25-82c1-bcb8cf9e8e63 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 18:20:07 multinode-193731 crio[957]: time="2023-12-06 18:20:07.616749853Z" level=info msg="Starting container: 13cd81fb527411144596acc24d147557717cc511d836d552598e7485a8d1430a" id=98d43f5d-8230-4e0b-8e36-e1a26740bf9a name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 18:20:07 multinode-193731 crio[957]: time="2023-12-06 18:20:07.624834598Z" level=info msg="Started container" PID=2377 containerID=13cd81fb527411144596acc24d147557717cc511d836d552598e7485a8d1430a description=kube-system/coredns-5dd5756b68-8t8qq/coredns id=98d43f5d-8230-4e0b-8e36-e1a26740bf9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=e878beb97a319fcd67960e404ec139a21f72cd32370736202e7c7fd5eddc1a69
	Dec 06 18:20:07 multinode-193731 crio[957]: time="2023-12-06 18:20:07.625230791Z" level=info msg="Started container" PID=2370 containerID=e1898dd860917ad7a3580f46b87653ec5f7468f105f8a0650fae11d388aae46a description=kube-system/storage-provisioner/storage-provisioner id=064b3d95-0064-4a25-82c1-bcb8cf9e8e63 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c335cf581e01f35079d0a15701e1dd98b39d8d7da1c04a67c5e9cbacb0d97ad
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.244245260Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-5kkfq/POD" id=8476bd46-71df-4b0e-b8ae-8a8267530435 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.244351315Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.258772399Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-5kkfq Namespace:default ID:7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede UID:9a0bc51e-cfe0-48f1-a305-9ef120e2faae NetNS:/var/run/netns/e1576668-2a65-4995-b07b-319e9b0a81e1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.258814915Z" level=info msg="Adding pod default_busybox-5bc68d56bd-5kkfq to CNI network \"kindnet\" (type=ptp)"
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.267500709Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-5kkfq Namespace:default ID:7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede UID:9a0bc51e-cfe0-48f1-a305-9ef120e2faae NetNS:/var/run/netns/e1576668-2a65-4995-b07b-319e9b0a81e1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.267628630Z" level=info msg="Checking pod default_busybox-5bc68d56bd-5kkfq for CNI network kindnet (type=ptp)"
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.295019165Z" level=info msg="Ran pod sandbox 7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede with infra container: default/busybox-5bc68d56bd-5kkfq/POD" id=8476bd46-71df-4b0e-b8ae-8a8267530435 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.296217213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=46f1ef78-42ee-4ab5-a24b-96dec86d16ee name=/runtime.v1.ImageService/ImageStatus
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.296444362Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=46f1ef78-42ee-4ab5-a24b-96dec86d16ee name=/runtime.v1.ImageService/ImageStatus
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.297144268Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=94aab440-f6d3-4913-9f82-5067885aa900 name=/runtime.v1.ImageService/PullImage
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.298108859Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 06 18:20:27 multinode-193731 crio[957]: time="2023-12-06 18:20:27.546452964Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.007420829Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=94aab440-f6d3-4913-9f82-5067885aa900 name=/runtime.v1.ImageService/PullImage
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.008374540Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8f6bb5ad-56d3-43b3-8b23-18fb212525a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.008980336Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8f6bb5ad-56d3-43b3-8b23-18fb212525a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.009758077Z" level=info msg="Creating container: default/busybox-5bc68d56bd-5kkfq/busybox" id=0ae2cc47-b52a-4f63-b64b-f9e8bed40664 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.009859157Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.080881993Z" level=info msg="Created container 396dea6295cbf4dd454a835b075fd6fc0b283816b790eb7b84853512cb7804c6: default/busybox-5bc68d56bd-5kkfq/busybox" id=0ae2cc47-b52a-4f63-b64b-f9e8bed40664 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.081451516Z" level=info msg="Starting container: 396dea6295cbf4dd454a835b075fd6fc0b283816b790eb7b84853512cb7804c6" id=c9a1476d-93c6-4901-a074-7eee0d3d55cf name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 18:20:28 multinode-193731 crio[957]: time="2023-12-06 18:20:28.089030657Z" level=info msg="Started container" PID=2552 containerID=396dea6295cbf4dd454a835b075fd6fc0b283816b790eb7b84853512cb7804c6 description=default/busybox-5bc68d56bd-5kkfq/busybox id=c9a1476d-93c6-4901-a074-7eee0d3d55cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede
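
The CRI-O entries above walk one pod start across the CRI gRPC API: RunPodSandbox (which also attaches the sandbox to the kindnet CNI network), ImageStatus and PullImage for gcr.io/k8s-minikube/busybox:1.28, then CreateContainer and StartContainer. A reduced sketch of that call sequence against crio.sock using the published CRI v1 types follows; every field value is illustrative, and a real caller (the kubelet) populates far more of each config.

package example

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startBusybox replays the sandbox -> pull -> create -> start sequence
// visible in the CRI-O log above. Illustrative sketch only.
func startBusybox(ctx context.Context) error {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// "Ran pod sandbox ..." - creates the infra container and its netns.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{Name: "busybox", Namespace: "default", Uid: "demo-uid"},
		},
	})
	if err != nil {
		return err
	}

	// "Pulling image ..." - fetched because ImageStatus found nothing.
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"},
	}); err != nil {
		return err
	}

	// "Created container ..." then "Started container ...".
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "busybox"},
			Image:    &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{},
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
	return err
}
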
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	396dea6295cbf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   7b15004c2051c       busybox-5bc68d56bd-5kkfq
	13cd81fb52741       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      24 seconds ago       Running             coredns                   0                   e878beb97a319       coredns-5dd5756b68-8t8qq
	e1898dd860917       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      24 seconds ago       Running             storage-provisioner       0                   0c335cf581e01       storage-provisioner
	de2950fa7e03b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      56 seconds ago       Running             kube-proxy                0                   49d7395d2ec8e       kube-proxy-tbznd
	89823a9566658       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      56 seconds ago       Running             kindnet-cni               0                   6193a2393da14       kindnet-8ldk5
	555c01b135043       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   aaac7fc1b620f       etcd-multinode-193731
	3080b8188463d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   464bf732d0aa3       kube-apiserver-multinode-193731
	56e7794df5127       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   5f89e2d65da97       kube-controller-manager-multinode-193731
	38f45085459dd       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   042a2e50d4e2a       kube-scheduler-multinode-193731
	
	* 
	* ==> coredns [13cd81fb527411144596acc24d147557717cc511d836d552598e7485a8d1430a] <==
	* [INFO] 10.244.0.3:37616 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113734s
	[INFO] 10.244.1.2:60152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112585s
	[INFO] 10.244.1.2:50361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001730038s
	[INFO] 10.244.1.2:58325 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086435s
	[INFO] 10.244.1.2:55759 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068663s
	[INFO] 10.244.1.2:44254 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001256072s
	[INFO] 10.244.1.2:50662 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078678s
	[INFO] 10.244.1.2:40322 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062774s
	[INFO] 10.244.1.2:42732 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048548s
	[INFO] 10.244.0.3:44853 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093401s
	[INFO] 10.244.0.3:52554 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081629s
	[INFO] 10.244.0.3:51852 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000052743s
	[INFO] 10.244.0.3:44002 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000044536s
	[INFO] 10.244.1.2:59560 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139022s
	[INFO] 10.244.1.2:45507 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103398s
	[INFO] 10.244.1.2:58661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065888s
	[INFO] 10.244.1.2:58502 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085697s
	[INFO] 10.244.0.3:50300 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098053s
	[INFO] 10.244.0.3:34520 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117431s
	[INFO] 10.244.0.3:47379 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107322s
	[INFO] 10.244.0.3:57885 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067154s
	[INFO] 10.244.1.2:39999 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150483s
	[INFO] 10.244.1.2:48933 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000076631s
	[INFO] 10.244.1.2:43520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061373s
	[INFO] 10.244.1.2:50962 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064605s
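
These query lines come from CoreDNS's log plugin. The pattern of trying kubernetes.default both as-is (NXDOMAIN upstream) and expanded through kubernetes.default.default.svc.cluster.local (NXDOMAIN) to kubernetes.default.svc.cluster.local (NOERROR) is the pod resolver walking its DNS search path, since the unqualified name has fewer dots than ndots. With Kubernetes defaults, a pod in the default namespace resolves through a resolv.conf along these lines (reconstructed for illustration, not captured from this run):

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The recurring PTR lookups for 10.0.96.10.in-addr.arpa are the reverse of that 10.96.0.10 cluster DNS service IP, and 1.58.168.192.in-addr.arpa resolves the 192.168.58.1 host gateway (host.minikube.internal).
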
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-193731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-193731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3
	                    minikube.k8s.io/name=multinode-193731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_06T18_19_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 18:19:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-193731
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 18:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 18:20:07 +0000   Wed, 06 Dec 2023 18:19:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 18:20:07 +0000   Wed, 06 Dec 2023 18:19:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 18:20:07 +0000   Wed, 06 Dec 2023 18:19:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 18:20:07 +0000   Wed, 06 Dec 2023 18:20:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-193731
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 30497331285e48f5a46421f4c31cc4d1
	  System UUID:                b7a2501a-cf13-4ecc-bf52-f1f09f24b94f
	  Boot ID:                    5f16510a-fcc2-4dea-8318-41aa6150c4de
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5kkfq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-8t8qq                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-multinode-193731                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-8ldk5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-multinode-193731             250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-193731    200m (2%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-tbznd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-193731             100m (1%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node multinode-193731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node multinode-193731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x8 over 76s)  kubelet          Node multinode-193731 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node multinode-193731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node multinode-193731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s                kubelet          Node multinode-193731 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node multinode-193731 event: Registered Node multinode-193731 in Controller
	  Normal  NodeReady                25s                kubelet          Node multinode-193731 status is now: NodeReady
	
	
	Name:               multinode-193731-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-193731-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ed075f134c3dd34466bae93fc5b34a7b7c859c3
	                    minikube.k8s.io/name=multinode-193731
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_06T18_20_22_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Dec 2023 18:20:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-193731-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Dec 2023 18:20:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Dec 2023 18:20:24 +0000   Wed, 06 Dec 2023 18:20:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Dec 2023 18:20:24 +0000   Wed, 06 Dec 2023 18:20:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Dec 2023 18:20:24 +0000   Wed, 06 Dec 2023 18:20:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Dec 2023 18:20:24 +0000   Wed, 06 Dec 2023 18:20:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-193731-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 2054e13f89314711bc8b63e53aafa29b
	  System UUID:                0e4b9730-22db-44f4-96d6-d15913ab0c2a
	  Boot ID:                    5f16510a-fcc2-4dea-8318-41aa6150c4de
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-k9dh8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-rd8zf               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-cr5kr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11s (x5 over 12s)  kubelet          Node multinode-193731-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 12s)  kubelet          Node multinode-193731-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 12s)  kubelet          Node multinode-193731-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8s                 kubelet          Node multinode-193731-m02 status is now: NodeReady
	  Normal  RegisteredNode           7s                 node-controller  Node multinode-193731-m02 event: Registered Node multinode-193731-m02 in Controller
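	In the "Non-terminated Pods" and "Allocated resources" tables above, the percentages are computed against each node's allocatable capacity (8 CPUs and ~32Gi of memory here), which is why a 100m CPU request shows as 1%. The same tables can be regenerated at any time with:
	
	    kubectl --context multinode-193731 describe node multinode-193731 multinode-193731-m02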
	
	* 
	* ==> dmesg <==
	* [  +0.004920] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006670] FS-Cache: N-cookie d=00000000349469f5{9p.inode} n=0000000016fce18f
	[  +0.008751] FS-Cache: N-key=[8] '0690130200000000'
	[  +2.633570] FS-Cache: Duplicate cookie detected
	[  +0.004703] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006809] FS-Cache: O-cookie d=00000000ccddd526{9P.session} n=000000005280aac1
	[  +0.007559] FS-Cache: O-key=[10] '34323935363638353031'
	[  +0.005393] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006594] FS-Cache: N-cookie d=00000000ccddd526{9P.session} n=00000000201f5d84
	[  +0.008902] FS-Cache: N-key=[10] '34323935363638353031'
	[  +5.075634] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 18:11] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +1.019846] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +2.015854] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +4.191730] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[  +8.191424] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[ +16.126837] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
	[Dec 6 18:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: da 17 80 4f 5a cd 3e 04 ef 37 5b 60 08 00
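	The repeating "martian source" entries above mean the kernel saw packets claiming a 127.0.0.1 source address arrive on eth0, which it logs (and normally drops) when reverse-path filtering and martian logging are enabled. Whether that logging is on can be checked on the host with:
	
	    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians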
	
	* 
	* ==> etcd [555c01b135043145022203942f677de4069344a8218d610177eb00c1b4454051] <==
	* {"level":"info","ts":"2023-12-06T18:19:16.932518Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-06T18:19:16.932699Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-06T18:19:16.932833Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-06T18:19:16.932861Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-06T18:19:16.932908Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-06T18:19:16.93294Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-06T18:19:17.321732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-06T18:19:17.321783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-06T18:19:17.321825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-06T18:19:17.321846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-06T18:19:17.321853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-06T18:19:17.321864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-06T18:19:17.321873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-06T18:19:17.32299Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-193731 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-06T18:19:17.323009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T18:19:17.323063Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-06T18:19:17.323103Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T18:19:17.323258Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-06T18:19:17.323282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-06T18:19:17.323995Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T18:19:17.324079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T18:19:17.324111Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-06T18:19:17.324481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-06T18:19:17.324504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-06T18:20:13.813144Z","caller":"traceutil/trace.go:171","msg":"trace[887686784] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"134.468276ms","start":"2023-12-06T18:20:13.67866Z","end":"2023-12-06T18:20:13.813128Z","steps":["trace[887686784] 'process raft request'  (duration: 134.365965ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:20:32 up  1:03,  0 users,  load average: 0.92, 1.21, 0.91
	Linux multinode-193731 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [89823a9566658f5480529aa76c580ed3b3c33ba4ab20766d3a6ec5e0afd9f1f8] <==
	* I1206 18:19:36.606690       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1206 18:19:36.606746       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1206 18:19:36.606924       1 main.go:116] setting mtu 1500 for CNI 
	I1206 18:19:36.606945       1 main.go:146] kindnetd IP family: "ipv4"
	I1206 18:19:36.606964       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1206 18:20:06.833718       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1206 18:20:06.842259       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1206 18:20:06.842285       1 main.go:227] handling current node
	I1206 18:20:16.855943       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1206 18:20:16.855972       1 main.go:227] handling current node
	I1206 18:20:26.868841       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1206 18:20:26.868873       1 main.go:227] handling current node
	I1206 18:20:26.868883       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1206 18:20:26.868888       1 main.go:250] Node multinode-193731-m02 has CIDR [10.244.1.0/24] 
	I1206 18:20:26.869040       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
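	kindnet's job here is exactly what the last lines show: for every remote node it programs a route sending that node's pod CIDR via the node's InternalIP. Under the docker driver, where the node is itself a container, the resulting route on the primary node can be inspected with:
	
	    docker exec multinode-193731 ip route show 10.244.1.0/24
	
	which should print a route along the lines of "10.244.1.0/24 via 192.168.58.3 dev eth0".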
	
	* 
	* ==> kube-apiserver [3080b8188463da5a9a4e3a8e0bc2b5a608dad9b0c569055363f7367594f6798b] <==
	* I1206 18:19:19.104681       1 aggregator.go:166] initial CRD sync complete...
	I1206 18:19:19.104710       1 autoregister_controller.go:141] Starting autoregister controller
	I1206 18:19:19.104737       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 18:19:19.104781       1 cache.go:39] Caches are synced for autoregister controller
	I1206 18:19:19.110890       1 shared_informer.go:318] Caches are synced for configmaps
	I1206 18:19:19.114318       1 controller.go:624] quota admission added evaluator for: namespaces
	I1206 18:19:19.120247       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 18:19:19.200384       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1206 18:19:19.200421       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1206 18:19:19.200723       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1206 18:19:19.965806       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1206 18:19:19.971510       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1206 18:19:19.971532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 18:19:20.344998       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 18:19:20.376739       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 18:19:20.410300       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1206 18:19:20.416021       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1206 18:19:20.417011       1 controller.go:624] quota admission added evaluator for: endpoints
	I1206 18:19:20.423063       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 18:19:21.125055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1206 18:19:21.709595       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1206 18:19:21.720541       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1206 18:19:21.728885       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1206 18:19:35.808351       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1206 18:19:36.010713       1 controller.go:624] quota admission added evaluator for: replicasets.apps
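	The repeated "quota admission added evaluator" lines above are the ResourceQuota admission plugin lazily registering an evaluator the first time each resource type is created; they are informational, not errors. The admission plugins actually enabled on this apiserver can be read off the static pod's command line:
	
	    kubectl --context multinode-193731 -n kube-system get pod kube-apiserver-multinode-193731 \
	      -o jsonpath='{.spec.containers[0].command}'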
	
	* 
	* ==> kube-controller-manager [56e7794df51277b1dcbe1872edc7056c8ac9d81bbee2566933bd7c47870c224e] <==
	* I1206 18:20:07.222521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.776µs"
	I1206 18:20:07.969405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.444µs"
	I1206 18:20:07.995258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.852126ms"
	I1206 18:20:07.995430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.76µs"
	I1206 18:20:10.054787       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1206 18:20:21.751648       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-193731-m02\" does not exist"
	I1206 18:20:21.759012       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-193731-m02" podCIDRs=["10.244.1.0/24"]
	I1206 18:20:21.775187       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cr5kr"
	I1206 18:20:21.775218       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rd8zf"
	I1206 18:20:24.123913       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-193731-m02"
	I1206 18:20:25.056582       1 event.go:307] "Event occurred" object="multinode-193731-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-193731-m02 event: Registered Node multinode-193731-m02 in Controller"
	I1206 18:20:25.056631       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-193731-m02"
	I1206 18:20:26.621884       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1206 18:20:26.631183       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-k9dh8"
	I1206 18:20:26.635281       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5kkfq"
	I1206 18:20:26.639752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.043112ms"
	I1206 18:20:26.648658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.603348ms"
	I1206 18:20:26.648759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.295µs"
	I1206 18:20:26.649399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="83.813µs"
	I1206 18:20:26.650598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.139µs"
	I1206 18:20:26.654887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="102.904µs"
	I1206 18:20:28.288924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.074842ms"
	I1206 18:20:28.288999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.644µs"
	I1206 18:20:29.012457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.128788ms"
	I1206 18:20:29.012543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.941µs"
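	The controller-manager entries above trace the normal join sequence for multinode-193731-m02: the node IPAM controller assigns the 10.244.1.0/24 PodCIDR, the daemonset controller creates kube-proxy and kindnet pods for the new node, and the deployment controller scales the busybox deployment to 2 replicas. The same sequence is visible as cluster events with:
	
	    kubectl --context multinode-193731 get events -A --sort-by=.lastTimestamp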
	
	* 
	* ==> kube-proxy [de2950fa7e03bcd16426cc422ec79ce639526d1d059cdb98b1d3be7be8120508] <==
	* I1206 18:19:36.601710       1 server_others.go:69] "Using iptables proxy"
	I1206 18:19:36.610136       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1206 18:19:36.628244       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 18:19:36.630293       1 server_others.go:152] "Using iptables Proxier"
	I1206 18:19:36.630333       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1206 18:19:36.630341       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1206 18:19:36.630380       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1206 18:19:36.630702       1 server.go:846] "Version info" version="v1.28.4"
	I1206 18:19:36.630720       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 18:19:36.631385       1 config.go:97] "Starting endpoint slice config controller"
	I1206 18:19:36.631430       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1206 18:19:36.631438       1 config.go:188] "Starting service config controller"
	I1206 18:19:36.631463       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1206 18:19:36.631468       1 config.go:315] "Starting node config controller"
	I1206 18:19:36.631474       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1206 18:19:36.732310       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1206 18:19:36.732308       1 shared_informer.go:318] Caches are synced for node config
	I1206 18:19:36.732340       1 shared_informer.go:318] Caches are synced for service config
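	kube-proxy here runs in iptables mode and sets route_localnet=1, which is what makes NodePort services reachable on 127.0.0.1 (as its own log line explains). The service chains it programs can be spot-checked from inside the node with:
	
	    out/minikube-linux-amd64 -p multinode-193731 ssh -- sudo iptables -t nat -L KUBE-SERVICES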
	
	* 
	* ==> kube-scheduler [38f45085459dddb1db793c56330e105d931c03806678e79dc33cad8e4dc6f9e9] <==
	* W1206 18:19:19.134894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 18:19:19.135004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 18:19:19.134906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:19:19.135029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 18:19:19.134906       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1206 18:19:19.135044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1206 18:19:19.135846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1206 18:19:19.135865       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1206 18:19:19.135922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1206 18:19:19.135927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1206 18:19:19.135943       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1206 18:19:19.135945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1206 18:19:19.135966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1206 18:19:19.135983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1206 18:19:19.136285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 18:19:19.136307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 18:19:19.940866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1206 18:19:19.940898       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1206 18:19:20.059470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1206 18:19:20.059511       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1206 18:19:20.135276       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1206 18:19:20.135307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1206 18:19:20.242816       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1206 18:19:20.242853       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1206 18:19:22.330061       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
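	The list/watch failures at 18:19:19-18:19:20 above are a startup race: the scheduler comes up before its RBAC bindings and the extension-apiserver-authentication ConfigMap are readable, and the final line shows its caches syncing a couple of seconds later. That the permissions did converge can be confirmed afterwards by impersonating the scheduler:
	
	    kubectl --context multinode-193731 auth can-i list pods --all-namespaces \
	      --as=system:kube-scheduler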
	
	* 
	* ==> kubelet <==
	* Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.009406    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2sx\" (UniqueName: \"kubernetes.io/projected/5400eb49-6ef8-4329-9b5a-799dceda044a-kube-api-access-ql2sx\") pod \"kube-proxy-tbznd\" (UID: \"5400eb49-6ef8-4329-9b5a-799dceda044a\") " pod="kube-system/kube-proxy-tbznd"
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.009440    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f5c0a719-e90e-4444-b144-e0b6f4d0db38-cni-cfg\") pod \"kindnet-8ldk5\" (UID: \"f5c0a719-e90e-4444-b144-e0b6f4d0db38\") " pod="kube-system/kindnet-8ldk5"
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.009480    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5400eb49-6ef8-4329-9b5a-799dceda044a-kube-proxy\") pod \"kube-proxy-tbznd\" (UID: \"5400eb49-6ef8-4329-9b5a-799dceda044a\") " pod="kube-system/kube-proxy-tbznd"
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.009511    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5400eb49-6ef8-4329-9b5a-799dceda044a-xtables-lock\") pod \"kube-proxy-tbznd\" (UID: \"5400eb49-6ef8-4329-9b5a-799dceda044a\") " pod="kube-system/kube-proxy-tbznd"
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.009541    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5c0a719-e90e-4444-b144-e0b6f4d0db38-lib-modules\") pod \"kindnet-8ldk5\" (UID: \"f5c0a719-e90e-4444-b144-e0b6f4d0db38\") " pod="kube-system/kindnet-8ldk5"
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.009605    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5400eb49-6ef8-4329-9b5a-799dceda044a-lib-modules\") pod \"kube-proxy-tbznd\" (UID: \"5400eb49-6ef8-4329-9b5a-799dceda044a\") " pod="kube-system/kube-proxy-tbznd"
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: W1206 18:19:36.301505    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio-49d7395d2ec8e56e1463f643950ef1d63da1dfbe97e2b56eb25db5239aaafeca WatchSource:0}: Error finding container 49d7395d2ec8e56e1463f643950ef1d63da1dfbe97e2b56eb25db5239aaafeca: Status 404 returned error can't find the container with id 49d7395d2ec8e56e1463f643950ef1d63da1dfbe97e2b56eb25db5239aaafeca
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: W1206 18:19:36.301777    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio-6193a2393da14c182ec57d1ef5c12ada9b0a9eb0c92b6918e461dcccfdea8697 WatchSource:0}: Error finding container 6193a2393da14c182ec57d1ef5c12ada9b0a9eb0c92b6918e461dcccfdea8697: Status 404 returned error can't find the container with id 6193a2393da14c182ec57d1ef5c12ada9b0a9eb0c92b6918e461dcccfdea8697
	Dec 06 18:19:36 multinode-193731 kubelet[1600]: I1206 18:19:36.916990    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8ldk5" podStartSLOduration=1.916938711 podCreationTimestamp="2023-12-06 18:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 18:19:36.916640521 +0000 UTC m=+15.229177615" watchObservedRunningTime="2023-12-06 18:19:36.916938711 +0000 UTC m=+15.229475987"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.183785    1600 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.205015    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tbznd" podStartSLOduration=32.204960234 podCreationTimestamp="2023-12-06 18:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 18:19:36.92556594 +0000 UTC m=+15.238103031" watchObservedRunningTime="2023-12-06 18:20:07.204960234 +0000 UTC m=+45.517497325"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.205553    1600 topology_manager.go:215] "Topology Admit Handler" podUID="635b29b3-0829-4e31-b46f-8ae9b78c6bb2" podNamespace="kube-system" podName="storage-provisioner"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.206889    1600 topology_manager.go:215] "Topology Admit Handler" podUID="b3765e1c-caa3-48e6-b18b-d1eec4d40452" podNamespace="kube-system" podName="coredns-5dd5756b68-8t8qq"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.328940    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ds6t\" (UniqueName: \"kubernetes.io/projected/b3765e1c-caa3-48e6-b18b-d1eec4d40452-kube-api-access-8ds6t\") pod \"coredns-5dd5756b68-8t8qq\" (UID: \"b3765e1c-caa3-48e6-b18b-d1eec4d40452\") " pod="kube-system/coredns-5dd5756b68-8t8qq"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.328993    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgfc\" (UniqueName: \"kubernetes.io/projected/635b29b3-0829-4e31-b46f-8ae9b78c6bb2-kube-api-access-qpgfc\") pod \"storage-provisioner\" (UID: \"635b29b3-0829-4e31-b46f-8ae9b78c6bb2\") " pod="kube-system/storage-provisioner"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.329016    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3765e1c-caa3-48e6-b18b-d1eec4d40452-config-volume\") pod \"coredns-5dd5756b68-8t8qq\" (UID: \"b3765e1c-caa3-48e6-b18b-d1eec4d40452\") " pod="kube-system/coredns-5dd5756b68-8t8qq"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.329083    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/635b29b3-0829-4e31-b46f-8ae9b78c6bb2-tmp\") pod \"storage-provisioner\" (UID: \"635b29b3-0829-4e31-b46f-8ae9b78c6bb2\") " pod="kube-system/storage-provisioner"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: W1206 18:20:07.549118    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio-0c335cf581e01f35079d0a15701e1dd98b39d8d7da1c04a67c5e9cbacb0d97ad WatchSource:0}: Error finding container 0c335cf581e01f35079d0a15701e1dd98b39d8d7da1c04a67c5e9cbacb0d97ad: Status 404 returned error can't find the container with id 0c335cf581e01f35079d0a15701e1dd98b39d8d7da1c04a67c5e9cbacb0d97ad
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: W1206 18:20:07.549439    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio-e878beb97a319fcd67960e404ec139a21f72cd32370736202e7c7fd5eddc1a69 WatchSource:0}: Error finding container e878beb97a319fcd67960e404ec139a21f72cd32370736202e7c7fd5eddc1a69: Status 404 returned error can't find the container with id e878beb97a319fcd67960e404ec139a21f72cd32370736202e7c7fd5eddc1a69
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.969008    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8t8qq" podStartSLOduration=31.968960674 podCreationTimestamp="2023-12-06 18:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 18:20:07.968925768 +0000 UTC m=+46.281462858" watchObservedRunningTime="2023-12-06 18:20:07.968960674 +0000 UTC m=+46.281497775"
	Dec 06 18:20:07 multinode-193731 kubelet[1600]: I1206 18:20:07.978314    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.978274504 podCreationTimestamp="2023-12-06 18:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-06 18:20:07.978096444 +0000 UTC m=+46.290633535" watchObservedRunningTime="2023-12-06 18:20:07.978274504 +0000 UTC m=+46.290811594"
	Dec 06 18:20:26 multinode-193731 kubelet[1600]: I1206 18:20:26.642201    1600 topology_manager.go:215] "Topology Admit Handler" podUID="9a0bc51e-cfe0-48f1-a305-9ef120e2faae" podNamespace="default" podName="busybox-5bc68d56bd-5kkfq"
	Dec 06 18:20:26 multinode-193731 kubelet[1600]: I1206 18:20:26.837692    1600 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnnm\" (UniqueName: \"kubernetes.io/projected/9a0bc51e-cfe0-48f1-a305-9ef120e2faae-kube-api-access-wrnnm\") pod \"busybox-5bc68d56bd-5kkfq\" (UID: \"9a0bc51e-cfe0-48f1-a305-9ef120e2faae\") " pod="default/busybox-5bc68d56bd-5kkfq"
	Dec 06 18:20:27 multinode-193731 kubelet[1600]: W1206 18:20:27.293072    1600 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio-7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede WatchSource:0}: Error finding container 7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede: Status 404 returned error can't find the container with id 7b15004c2051c80a7a60761c3ea73689c94fd9624191b15702a2d8a665caaede
	Dec 06 18:20:29 multinode-193731 kubelet[1600]: I1206 18:20:29.008311    1600 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-5kkfq" podStartSLOduration=2.296997964 podCreationTimestamp="2023-12-06 18:20:26 +0000 UTC" firstStartedPulling="2023-12-06 18:20:27.296624525 +0000 UTC m=+65.609161598" lastFinishedPulling="2023-12-06 18:20:28.007866294 +0000 UTC m=+66.320403367" observedRunningTime="2023-12-06 18:20:29.008083221 +0000 UTC m=+67.320620342" watchObservedRunningTime="2023-12-06 18:20:29.008239733 +0000 UTC m=+67.320776826"
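	The kubelet warnings of the form "Failed to process watch event ... Status 404" above appear to be cAdvisor racing container creation: it receives a cgroup watch event for a container that CRI-O has not finished registering (or has already removed). They read as noise rather than failures; if needed, the full kubelet journal can be pulled from the node with:
	
	    out/minikube-linux-amd64 -p multinode-193731 ssh -- sudo journalctl -u kubelet --no-pager -n 100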
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-193731 -n multinode-193731
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-193731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.18s)

                                                
                                    
x
+
TestPreload (29.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-426246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1206 18:25:54.390103   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-426246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: exit status 100 (24.931122396s)

                                                
                                                
-- stdout --
	* [test-preload-426246] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node test-preload-426246 in cluster test-preload-426246
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.24.4 on CRI-O 1.24.6 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 18:25:52.665126  134344 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:25:52.665415  134344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:25:52.665426  134344 out.go:309] Setting ErrFile to fd 2...
	I1206 18:25:52.665431  134344 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:25:52.665618  134344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:25:52.666237  134344 out.go:303] Setting JSON to false
	I1206 18:25:52.667620  134344 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4102,"bootTime":1701883051,"procs":592,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:25:52.667682  134344 start.go:138] virtualization: kvm guest
	I1206 18:25:52.670411  134344 out.go:177] * [test-preload-426246] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:25:52.672226  134344 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:25:52.672294  134344 notify.go:220] Checking for updates...
	I1206 18:25:52.673920  134344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:25:52.675918  134344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:25:52.677624  134344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:25:52.679137  134344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:25:52.680881  134344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:25:52.682873  134344 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:25:52.705251  134344 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:25:52.705364  134344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:25:52.757913  134344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-06 18:25:52.748659813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:25:52.758022  134344 docker.go:295] overlay module found
	I1206 18:25:52.760308  134344 out.go:177] * Using the docker driver based on user configuration
	I1206 18:25:52.762178  134344 start.go:298] selected driver: docker
	I1206 18:25:52.762190  134344 start.go:902] validating driver "docker" against <nil>
	I1206 18:25:52.762201  134344 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:25:52.762954  134344 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:25:52.816506  134344 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-06 18:25:52.807838784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:25:52.816672  134344 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:25:52.816949  134344 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 18:25:52.819173  134344 out.go:177] * Using Docker driver with root privileges
	I1206 18:25:52.821004  134344 cni.go:84] Creating CNI manager for ""
	I1206 18:25:52.821032  134344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:25:52.821049  134344 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 18:25:52.821063  134344 start_flags.go:323] config:
	{Name:test-preload-426246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-426246 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:25:52.822946  134344 out.go:177] * Starting control plane node test-preload-426246 in cluster test-preload-426246
	I1206 18:25:52.824606  134344 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:25:52.826407  134344 out.go:177] * Pulling base image ...
	I1206 18:25:52.828310  134344 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1206 18:25:52.828402  134344 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:25:52.828669  134344 cache.go:107] acquiring lock: {Name:mk8fc2f48817498753a97968b7f18716e6e1df7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828671  134344 cache.go:107] acquiring lock: {Name:mk60583be4130776257dfa204d1327e514695152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828737  134344 cache.go:107] acquiring lock: {Name:mkd4934c9e3b60d83f8703b5518984a3f6db875a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828796  134344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/test-preload-426246/config.json ...
	I1206 18:25:52.828832  134344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/test-preload-426246/config.json: {Name:mk28ac3d0cc434008e51fd0ebfb7b4e2f7067a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 18:25:52.828844  134344 cache.go:107] acquiring lock: {Name:mkf767e21441145eb28851b03a51e34b57c16823 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828902  134344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:25:52.828898  134344 cache.go:107] acquiring lock: {Name:mk1ab0f9574c19a4cae38b98f3698771d2e23400 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828847  134344 cache.go:107] acquiring lock: {Name:mkc44d20d5004f4db9216e21042a0e15c2ff3d71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828927  134344 cache.go:107] acquiring lock: {Name:mk31783f7944777f4ce62147264db23599470f44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.828958  134344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:25:52.828903  134344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:25:52.828985  134344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:25:52.829041  134344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1206 18:25:52.829058  134344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:25:52.829095  134344 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1206 18:25:52.828688  134344 cache.go:107] acquiring lock: {Name:mkec6c1288e64593303b92605a2e82ab935d755a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.829253  134344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:25:52.830221  134344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:25:52.830222  134344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:25:52.830226  134344 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1206 18:25:52.830222  134344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:25:52.830227  134344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:25:52.830224  134344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1206 18:25:52.830283  134344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:25:52.830317  134344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:25:52.848387  134344 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon, skipping pull
	I1206 18:25:52.848417  134344 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in daemon, skipping load
	I1206 18:25:52.848436  134344 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:25:52.848462  134344 start.go:365] acquiring machines lock for test-preload-426246: {Name:mkf06c0264da5391d33f3470e9cd6306b86eb887 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:25:52.848560  134344 start.go:369] acquired machines lock for "test-preload-426246" in 77.353µs
	I1206 18:25:52.848585  134344 start.go:93] Provisioning new machine with config: &{Name:test-preload-426246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-426246 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 18:25:52.848668  134344 start.go:125] createHost starting for "" (driver="docker")
	I1206 18:25:52.851332  134344 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1206 18:25:52.851574  134344 start.go:159] libmachine.API.Create for "test-preload-426246" (driver="docker")
	I1206 18:25:52.851604  134344 client.go:168] LocalClient.Create starting
	I1206 18:25:52.851661  134344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem
	I1206 18:25:52.851709  134344 main.go:141] libmachine: Decoding PEM data...
	I1206 18:25:52.851740  134344 main.go:141] libmachine: Parsing certificate...
	I1206 18:25:52.851801  134344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem
	I1206 18:25:52.851826  134344 main.go:141] libmachine: Decoding PEM data...
	I1206 18:25:52.851842  134344 main.go:141] libmachine: Parsing certificate...
	I1206 18:25:52.852199  134344 cli_runner.go:164] Run: docker network inspect test-preload-426246 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 18:25:52.868648  134344 cli_runner.go:211] docker network inspect test-preload-426246 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 18:25:52.868732  134344 network_create.go:281] running [docker network inspect test-preload-426246] to gather additional debugging logs...
	I1206 18:25:52.868754  134344 cli_runner.go:164] Run: docker network inspect test-preload-426246
	W1206 18:25:52.886733  134344 cli_runner.go:211] docker network inspect test-preload-426246 returned with exit code 1
	I1206 18:25:52.886767  134344 network_create.go:284] error running [docker network inspect test-preload-426246]: docker network inspect test-preload-426246: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network test-preload-426246 not found
	I1206 18:25:52.886781  134344 network_create.go:286] output of [docker network inspect test-preload-426246]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network test-preload-426246 not found
	
	** /stderr **
	I1206 18:25:52.886921  134344 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:25:52.904630  134344 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2ab48e65b3ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b5:68:ee:c7} reservation:<nil>}
	I1206 18:25:52.905260  134344 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9a05231ecf41 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:01:68:2b:87} reservation:<nil>}
	I1206 18:25:52.905945  134344 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002f98860}
	I1206 18:25:52.905970  134344 network_create.go:124] attempt to create docker network test-preload-426246 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1206 18:25:52.906034  134344 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-426246 test-preload-426246
	I1206 18:25:52.963411  134344 network_create.go:108] docker network test-preload-426246 192.168.67.0/24 created
	I1206 18:25:52.963443  134344 kic.go:121] calculated static IP "192.168.67.2" for the "test-preload-426246" container
	I1206 18:25:52.963493  134344 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 18:25:52.980483  134344 cli_runner.go:164] Run: docker volume create test-preload-426246 --label name.minikube.sigs.k8s.io=test-preload-426246 --label created_by.minikube.sigs.k8s.io=true
	I1206 18:25:52.989108  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1206 18:25:52.997881  134344 oci.go:103] Successfully created a docker volume test-preload-426246
	I1206 18:25:52.997972  134344 cli_runner.go:164] Run: docker run --rm --name test-preload-426246-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-426246 --entrypoint /usr/bin/test -v test-preload-426246:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f -d /var/lib
	I1206 18:25:53.011817  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1206 18:25:53.014583  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1206 18:25:53.027949  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1206 18:25:53.058493  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1206 18:25:53.070780  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1206 18:25:53.081368  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1206 18:25:53.114285  134344 cache.go:162] opening:  /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 18:25:53.137400  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 exists
	I1206 18:25:53.137427  134344 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7" took 308.61513ms
	I1206 18:25:53.137438  134344 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 succeeded
	I1206 18:25:53.235342  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 18:25:53.235373  134344 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 406.689767ms
	I1206 18:25:53.235389  134344 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 18:25:53.516754  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1206 18:25:53.516787  134344 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6" took 688.051212ms
	I1206 18:25:53.516804  134344 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1206 18:25:53.642229  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1206 18:25:53.642262  134344 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4" took 813.457264ms
	I1206 18:25:53.642282  134344 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1206 18:25:53.673431  134344 oci.go:107] Successfully prepared a docker volume test-preload-426246
	I1206 18:25:53.673466  134344 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	W1206 18:25:53.673614  134344 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 18:25:53.673707  134344 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 18:25:53.744717  134344 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-426246 --name test-preload-426246 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-426246 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-426246 --network test-preload-426246 --ip 192.168.67.2 --volume test-preload-426246:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
	I1206 18:25:53.816005  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1206 18:25:53.816041  134344 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4" took 987.395375ms
	I1206 18:25:53.816074  134344 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1206 18:25:53.942513  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1206 18:25:53.942553  134344 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4" took 1.113888715s
	I1206 18:25:53.942571  134344 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1206 18:25:53.945532  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1206 18:25:53.945563  134344 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4" took 1.116637451s
	I1206 18:25:53.945579  134344 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1206 18:25:54.234970  134344 cli_runner.go:164] Run: docker container inspect test-preload-426246 --format={{.State.Running}}
	I1206 18:25:54.252938  134344 cli_runner.go:164] Run: docker container inspect test-preload-426246 --format={{.State.Status}}
	I1206 18:25:54.271256  134344 cli_runner.go:164] Run: docker exec test-preload-426246 stat /var/lib/dpkg/alternatives/iptables
	I1206 18:25:54.325701  134344 oci.go:144] the created container "test-preload-426246" has a running status.
	I1206 18:25:54.325739  134344 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa...
	I1206 18:25:54.435821  134344 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 18:25:54.457731  134344 cli_runner.go:164] Run: docker container inspect test-preload-426246 --format={{.State.Status}}
	I1206 18:25:54.474406  134344 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 18:25:54.474433  134344 kic_runner.go:114] Args: [docker exec --privileged test-preload-426246 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 18:25:54.539776  134344 cli_runner.go:164] Run: docker container inspect test-preload-426246 --format={{.State.Status}}
	I1206 18:25:54.557438  134344 machine.go:88] provisioning docker machine ...
	I1206 18:25:54.557482  134344 ubuntu.go:169] provisioning hostname "test-preload-426246"
	I1206 18:25:54.557548  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:54.577049  134344 main.go:141] libmachine: Using SSH client type: native
	I1206 18:25:54.577622  134344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I1206 18:25:54.577648  134344 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-426246 && echo "test-preload-426246" | sudo tee /etc/hostname
	I1206 18:25:54.578296  134344 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59706->127.0.0.1:32897: read: connection reset by peer
	I1206 18:25:54.840820  134344 cache.go:157] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 exists
	I1206 18:25:54.840850  134344 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0" took 2.011984973s
	I1206 18:25:54.840863  134344 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1206 18:25:54.840883  134344 cache.go:87] Successfully saved all images to host disk.
	I1206 18:25:57.710531  134344 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-426246
	
	I1206 18:25:57.710609  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:57.727400  134344 main.go:141] libmachine: Using SSH client type: native
	I1206 18:25:57.727723  134344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I1206 18:25:57.727741  134344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-426246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-426246/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-426246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:25:57.848570  134344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:25:57.848603  134344 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:25:57.848626  134344 ubuntu.go:177] setting up certificates
	I1206 18:25:57.848636  134344 provision.go:83] configureAuth start
	I1206 18:25:57.848687  134344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-426246
	I1206 18:25:57.865207  134344 provision.go:138] copyHostCerts
	I1206 18:25:57.865266  134344 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem, removing ...
	I1206 18:25:57.865276  134344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:25:57.865339  134344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:25:57.865419  134344 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem, removing ...
	I1206 18:25:57.865427  134344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:25:57.865450  134344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:25:57.865502  134344 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem, removing ...
	I1206 18:25:57.865510  134344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:25:57.865529  134344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:25:57.865571  134344 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.test-preload-426246 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-426246]
	I1206 18:25:58.005008  134344 provision.go:172] copyRemoteCerts
	I1206 18:25:58.005082  134344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:25:58.005114  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:58.021990  134344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa Username:docker}
	I1206 18:25:58.116880  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 18:25:58.137709  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:25:58.158503  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1206 18:25:58.178765  134344 provision.go:86] duration metric: configureAuth took 330.116881ms
	I1206 18:25:58.178799  134344 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:25:58.178966  134344 config.go:182] Loaded profile config "test-preload-426246": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1206 18:25:58.179053  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:58.195264  134344 main.go:141] libmachine: Using SSH client type: native
	I1206 18:25:58.195572  134344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I1206 18:25:58.195590  134344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:25:58.399471  134344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:25:58.399502  134344 machine.go:91] provisioned docker machine in 3.842036397s
	I1206 18:25:58.399516  134344 client.go:171] LocalClient.Create took 5.547904709s
	I1206 18:25:58.399539  134344 start.go:167] duration metric: libmachine.API.Create for "test-preload-426246" took 5.547964897s
	I1206 18:25:58.399555  134344 start.go:300] post-start starting for "test-preload-426246" (driver="docker")
	I1206 18:25:58.399569  134344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:25:58.399651  134344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:25:58.399710  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:58.415701  134344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa Username:docker}
	I1206 18:25:58.508827  134344 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:25:58.511869  134344 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:25:58.511899  134344 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:25:58.511917  134344 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:25:58.511924  134344 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1206 18:25:58.511934  134344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:25:58.511986  134344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:25:58.512065  134344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> 163462.pem in /etc/ssl/certs
	I1206 18:25:58.512149  134344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:25:58.519658  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:25:58.540532  134344 start.go:303] post-start completed in 140.964473ms
	I1206 18:25:58.540849  134344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-426246
	I1206 18:25:58.557256  134344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/test-preload-426246/config.json ...
	I1206 18:25:58.557500  134344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:25:58.557536  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:58.573495  134344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa Username:docker}
	I1206 18:25:58.656819  134344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:25:58.660808  134344 start.go:128] duration metric: createHost completed in 5.812126657s
	I1206 18:25:58.660831  134344 start.go:83] releasing machines lock for "test-preload-426246", held for 5.812259124s
	I1206 18:25:58.660883  134344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-426246
	I1206 18:25:58.676625  134344 ssh_runner.go:195] Run: cat /version.json
	I1206 18:25:58.676680  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:58.676715  134344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:25:58.676775  134344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-426246
	I1206 18:25:58.693768  134344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa Username:docker}
	I1206 18:25:58.693939  134344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/test-preload-426246/id_rsa Username:docker}
	I1206 18:25:58.775997  134344 ssh_runner.go:195] Run: systemctl --version
	I1206 18:25:58.858205  134344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:25:58.994061  134344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:25:58.998196  134344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:25:59.016765  134344 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:25:59.016852  134344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:25:59.043982  134344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1206 18:25:59.044009  134344 start.go:475] detecting cgroup driver to use...
	I1206 18:25:59.044038  134344 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:25:59.044100  134344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:25:59.057979  134344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:25:59.068153  134344 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:25:59.068213  134344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:25:59.080399  134344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:25:59.093363  134344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 18:25:59.167426  134344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:25:59.247280  134344 docker.go:219] disabling docker service ...
	I1206 18:25:59.247339  134344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:25:59.265532  134344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:25:59.276212  134344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:25:59.351057  134344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:25:59.435555  134344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:25:59.446263  134344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:25:59.461002  134344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1206 18:25:59.461068  134344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:25:59.470077  134344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 18:25:59.470145  134344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:25:59.479337  134344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:25:59.488402  134344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:25:59.497255  134344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 18:25:59.505522  134344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 18:25:59.512951  134344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 18:25:59.520417  134344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 18:25:59.602110  134344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 18:25:59.696068  134344 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 18:25:59.696133  134344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 18:25:59.699558  134344 start.go:543] Will wait 60s for crictl version
	I1206 18:25:59.699613  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:25:59.702699  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 18:25:59.735361  134344 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1206 18:25:59.735452  134344 ssh_runner.go:195] Run: crio --version
	I1206 18:25:59.769006  134344 ssh_runner.go:195] Run: crio --version
	I1206 18:25:59.804060  134344 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.6 ...
	I1206 18:25:59.805712  134344 cli_runner.go:164] Run: docker network inspect test-preload-426246 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 18:25:59.821722  134344 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1206 18:25:59.825198  134344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 18:25:59.835311  134344 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1206 18:25:59.835365  134344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 18:25:59.866898  134344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1206 18:25:59.866922  134344 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1206 18:25:59.866992  134344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:25:59.867002  134344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:25:59.867027  134344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:25:59.867047  134344 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1206 18:25:59.867076  134344 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:25:59.867150  134344 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1206 18:25:59.867150  134344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:25:59.867193  134344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:25:59.867946  134344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:25:59.868004  134344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:25:59.868191  134344 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1206 18:25:59.868193  134344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:25:59.868193  134344 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:25:59.868195  134344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:25:59.868197  134344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1206 18:25:59.868197  134344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:25:59.996096  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1206 18:26:00.031034  134344 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1206 18:26:00.031071  134344 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1206 18:26:00.031129  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.034281  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1206 18:26:00.034979  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:26:00.041098  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1206 18:26:00.041244  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:26:00.042146  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:26:00.047219  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:26:00.080619  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:26:00.109319  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1206 18:26:00.109418  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1206 18:26:00.125077  134344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1206 18:26:00.125155  134344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:26:00.125197  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.127063  134344 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1206 18:26:00.127128  134344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1206 18:26:00.127165  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.141861  134344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:26:00.203680  134344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1206 18:26:00.203725  134344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:26:00.203787  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.206512  134344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1206 18:26:00.206557  134344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:26:00.206605  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.208582  134344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1206 18:26:00.208620  134344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:26:00.208655  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.215013  134344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1206 18:26:00.215049  134344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:26:00.215090  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.215090  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1206 18:26:00.215121  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I1206 18:26:00.215155  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1206 18:26:00.215189  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1206 18:26:00.245749  134344 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1206 18:26:00.245812  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1206 18:26:00.304983  134344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1206 18:26:00.305041  134344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:26:00.305089  134344 ssh_runner.go:195] Run: which crictl
	I1206 18:26:00.305118  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1206 18:26:00.305213  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1206 18:26:00.305259  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1206 18:26:00.313220  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1206 18:26:00.313289  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1206 18:26:00.313362  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1206 18:26:00.317480  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1206 18:26:00.317613  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1206 18:26:00.518930  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1206 18:26:00.519068  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1206 18:26:00.519140  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1206 18:26:00.519152  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1206 18:26:00.519139  134344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 18:26:00.519198  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1206 18:26:00.519229  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1206 18:26:00.519252  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1206 18:26:00.519259  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1206 18:26:00.519305  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.24.4': No such file or directory
	I1206 18:26:00.519321  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1206 18:26:00.519323  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1206 18:26:00.519332  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 --> /var/lib/minikube/images/kube-controller-manager_v1.24.4 (31047168 bytes)
	I1206 18:26:00.519336  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I1206 18:26:00.559191  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.24.4': No such file or directory
	I1206 18:26:00.559206  134344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1206 18:26:00.559244  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 --> /var/lib/minikube/images/kube-apiserver_v1.24.4 (33814016 bytes)
	I1206 18:26:00.559288  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.24.4': No such file or directory
	I1206 18:26:00.559320  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 --> /var/lib/minikube/images/kube-proxy_v1.24.4 (39519744 bytes)
	I1206 18:26:00.559330  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1206 18:26:00.559342  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1206 18:26:00.559344  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I1206 18:26:00.559374  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.24.4': No such file or directory
	I1206 18:26:00.559397  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 --> /var/lib/minikube/images/kube-scheduler_v1.24.4 (15491584 bytes)
	I1206 18:26:00.671888  134344 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1206 18:26:00.671936  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1206 18:26:00.840349  134344 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1206 18:26:00.840421  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1206 18:26:01.947397  134344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (1.106947503s)
	I1206 18:26:01.947428  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1206 18:26:01.947452  134344 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1206 18:26:01.947495  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1206 18:26:02.484719  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1206 18:26:02.484757  134344 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1206 18:26:02.484809  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1206 18:26:03.523680  134344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.03884124s)
	I1206 18:26:03.523714  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1206 18:26:03.523748  134344 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1206 18:26:03.523809  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1206 18:26:05.161056  134344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.637220602s)
	I1206 18:26:05.161080  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1206 18:26:05.161106  134344 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1206 18:26:05.161152  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1206 18:26:06.999686  134344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (1.838509251s)
	I1206 18:26:06.999714  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1206 18:26:06.999737  134344 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1206 18:26:06.999783  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1206 18:26:09.140494  134344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.140685288s)
	I1206 18:26:09.140519  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1206 18:26:09.140543  134344 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1206 18:26:09.140578  134344 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1206 18:26:13.808160  134344 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (4.667551896s)
	I1206 18:26:13.808190  134344 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1206 18:26:13.808216  134344 cache_images.go:123] Successfully loaded all cached images
	I1206 18:26:13.808232  134344 cache_images.go:92] LoadImages completed in 13.94129815s
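
The block above is minikube's cached-image flow in miniature: stat each image tarball on the node, scp any that are missing from the local cache, and load the newcomers into the CRI-O store with `sudo podman load -i`. A minimal local sketch of that check-then-transfer pattern, with plain ssh/scp subprocesses standing in for minikube's ssh_runner and every host and path a hypothetical placeholder:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the check-then-transfer flow in the log above:
// stat the target on the node; on failure, copy the cached tarball over
// and podman-load it. Host and paths are illustrative, not minikube's API.
func loadCachedImage(host, cached, target string) error {
	// Existence check: the same stat invocation the log shows.
	if err := exec.Command("ssh", host, "stat", "-c", "%s %y", target).Run(); err == nil {
		return nil // already on the node, nothing to transfer
	}
	// Transfer the cached tarball to the node.
	if out, err := exec.Command("scp", cached, host+":"+target).CombinedOutput(); err != nil {
		return fmt.Errorf("scp: %v: %s", err, out)
	}
	// Load it into the container store, as "sudo podman load -i" does above.
	if out, err := exec.Command("ssh", host, "sudo", "podman", "load", "-i", target).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative invocation only.
	if err := loadCachedImage("docker@127.0.0.1",
		"/cache/images/coredns_v1.8.6",
		"/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
		fmt.Println("load failed:", err)
	}
}

The stat-first step is what makes the transfer idempotent: anything already present on a warm node is skipped without moving bytes, which is why only the missing images above get scp'd.
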
	I1206 18:26:13.808333  134344 ssh_runner.go:195] Run: crio config
	I1206 18:26:13.849515  134344 cni.go:84] Creating CNI manager for ""
	I1206 18:26:13.849543  134344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:26:13.849564  134344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1206 18:26:13.849591  134344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-426246 NodeName:test-preload-426246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 18:26:13.849735  134344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-426246"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
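
The config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A short sketch of iterating such a stream with gopkg.in/yaml.v3, just to show the document boundaries; the file name is an assumption, since kubeadm consumes the stream directly:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// kubeadm.yaml stands in for the generated config shown above.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3 decodes one document per Decode call, so looping the stream
	// visits each of the four kubeadm/kubelet/kube-proxy documents in turn.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
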
	I1206 18:26:13.849803  134344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-426246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-426246 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1206 18:26:13.849853  134344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1206 18:26:13.858323  134344 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.24.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.24.4': No such file or directory
	
	Initiating transfer...
	I1206 18:26:13.858380  134344 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.24.4
	I1206 18:26:13.866666  134344 download.go:107] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.24.4/kubectl
	I1206 18:26:13.866668  134344 download.go:107] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.24.4/kubelet
	I1206 18:26:13.866670  134344 download.go:107] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.24.4/kubeadm
	I1206 18:26:16.274417  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubectl
	I1206 18:26:16.278539  134344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubectl': No such file or directory
	I1206 18:26:16.278578  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.24.4/kubectl --> /var/lib/minikube/binaries/v1.24.4/kubectl (45715456 bytes)
	I1206 18:26:17.404628  134344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm
	I1206 18:26:17.409005  134344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubeadm': No such file or directory
	I1206 18:26:17.409044  134344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.24.4/kubeadm --> /var/lib/minikube/binaries/v1.24.4/kubeadm (44384256 bytes)
	I1206 18:26:17.528623  134344 out.go:177] 
	W1206 18:26:17.530336  134344 out.go:239] X Exiting due to K8S_INSTALL_FAILED: Failed to update cluster: updating control plane: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.24.4/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x462c340 0x462c340 0x462c340 0x462c340 0x462c340 0x462c340 0x462c340] Decompressors:map[bz2:0xc00057b5b0 gz:0xc00057b5b8 tar:0xc00057b560 tar.bz2:0xc00057b570 tar.gz:0xc00057b580 tar.xz:0xc00057b590 tar.zst:0xc00057b5a0 tbz2:0xc00057b570 tgz:0xc00057b580 txz:0xc00057b590 tzst:0xc00057b5a0 xz:0xc00057b5c0 zip:0xc00057b5d0 zst:0xc00057b5c8] Getters:map[file:0xc003925b10 http:0xc0029279f0 https:0xc002927a40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.150.0.14:48262->151.101.193.55:443: read: connection reset by peer
	W1206 18:26:17.530361  134344 out.go:239] * 
	W1206 18:26:17.531193  134344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 18:26:17.533269  134344 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-linux-amd64 start -p test-preload-426246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4 failed: exit status 100
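
The fatal error above is a transport failure inside hashicorp/go-getter; the `&{Ctx:... Decompressors:... Getters:...}` dump is its Client struct, and the TCP connection to the release CDN was reset mid-fetch, so the kubelet binary (and with it the control-plane update) never landed. A minimal sketch of the same checksummed fetch, assuming go-getter v1 and an illustrative destination path:

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum=file:... query asks go-getter to fetch the sidecar
	// .sha256 file and verify the payload against it, as in the failing URL.
	src := "https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet" +
		"?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet.sha256"

	// GetFile downloads src to the given path; /tmp/kubelet is illustrative.
	if err := getter.GetFile("/tmp/kubelet", src); err != nil {
		// A "connection reset by peer" surfaces here exactly as in the log.
		fmt.Println("download failed:", err)
	}
}

The checksum mode also means a truncated-but-completed download would fail loudly rather than leave a corrupt kubelet in the cache.
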
panic.go:523: *** TestPreload FAILED at 2023-12-06 18:26:17.550352533 +0000 UTC m=+1572.306976584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-426246
helpers_test.go:235: (dbg) docker inspect test-preload-426246:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "54702ac6d3bfa2a56ff777f404df281aa6c7a211fd99f1898dc42d1b7a665d49",
	        "Created": "2023-12-06T18:25:53.765343473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T18:25:54.22593356Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:75d04379c0692a7c7580bf47e8a90f896e08db4459e8feaaa815f73da348a8e2",
	        "ResolvConfPath": "/var/lib/docker/containers/54702ac6d3bfa2a56ff777f404df281aa6c7a211fd99f1898dc42d1b7a665d49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54702ac6d3bfa2a56ff777f404df281aa6c7a211fd99f1898dc42d1b7a665d49/hostname",
	        "HostsPath": "/var/lib/docker/containers/54702ac6d3bfa2a56ff777f404df281aa6c7a211fd99f1898dc42d1b7a665d49/hosts",
	        "LogPath": "/var/lib/docker/containers/54702ac6d3bfa2a56ff777f404df281aa6c7a211fd99f1898dc42d1b7a665d49/54702ac6d3bfa2a56ff777f404df281aa6c7a211fd99f1898dc42d1b7a665d49-json.log",
	        "Name": "/test-preload-426246",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-426246:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-426246",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/14aae0388bb6825310f0b684166ff329c2234e8d8a15bd8d71a5ab6fa8447156-init/diff:/var/lib/docker/overlay2/ec06e12da6157da3a94af2b1665e4c856c3ea27be6944a5fef4fd2886cc68e28/diff",
	                "MergedDir": "/var/lib/docker/overlay2/14aae0388bb6825310f0b684166ff329c2234e8d8a15bd8d71a5ab6fa8447156/merged",
	                "UpperDir": "/var/lib/docker/overlay2/14aae0388bb6825310f0b684166ff329c2234e8d8a15bd8d71a5ab6fa8447156/diff",
	                "WorkDir": "/var/lib/docker/overlay2/14aae0388bb6825310f0b684166ff329c2234e8d8a15bd8d71a5ab6fa8447156/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-426246",
	                "Source": "/var/lib/docker/volumes/test-preload-426246/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-426246",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-426246",
	                "name.minikube.sigs.k8s.io": "test-preload-426246",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "027712adc5a0ce694d8b2912ed98507c9eea595e630b058a291629d0c5918cb3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/027712adc5a0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-426246": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "54702ac6d3bf",
	                        "test-preload-426246"
	                    ],
	                    "NetworkID": "21639a66cbad394b0b62a0c814b762ba32c9adf527ec8d28eec7b3ceced7a4cf",
	                    "EndpointID": "c23ea9195d87bcd41e0db2e7af852735dbce56dca1929d8c46b547fd5f09b598",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
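
When only a field or two of this document matters (the status helper below effectively wants `.State`), docker's template flag extracts it without hand-parsing the JSON. A small sketch, reusing the container name from this test run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull just the container state and IP with docker's -f template,
	// instead of walking the full inspect document shown above.
	out, err := exec.Command("docker", "inspect", "-f",
		"{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
		"test-preload-426246").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "running 192.168.67.2"
}
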
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-426246 -n test-preload-426246
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-426246 -n test-preload-426246: exit status 6 (282.935899ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 18:26:17.840031  136745 status.go:415] kubeconfig endpoint: extract IP: "test-preload-426246" does not appear in /home/jenkins/minikube-integration/17711-9529/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-426246" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-426246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-426246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-426246: (4.122489167s)
--- FAIL: TestPreload (29.37s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (66.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.3439633949.exe start -p running-upgrade-343610 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.3439633949.exe start -p running-upgrade-343610 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.758301025s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-343610 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-343610 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.104810414s)

                                                
                                                
-- stdout --
	* [running-upgrade-343610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-343610 in cluster running-upgrade-343610
	* Pulling base image ...
	* Updating the running docker "running-upgrade-343610" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 18:31:58.091955  196083 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:31:58.092230  196083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:31:58.092241  196083 out.go:309] Setting ErrFile to fd 2...
	I1206 18:31:58.092246  196083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:31:58.092464  196083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:31:58.093023  196083 out.go:303] Setting JSON to false
	I1206 18:31:58.094466  196083 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4467,"bootTime":1701883051,"procs":515,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:31:58.094530  196083 start.go:138] virtualization: kvm guest
	I1206 18:31:58.096917  196083 out.go:177] * [running-upgrade-343610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:31:58.098542  196083 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:31:58.098592  196083 notify.go:220] Checking for updates...
	I1206 18:31:58.100134  196083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:31:58.101683  196083 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:31:58.103289  196083 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:31:58.104866  196083 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:31:58.106255  196083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:31:58.108083  196083 config.go:182] Loaded profile config "running-upgrade-343610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1206 18:31:58.108104  196083 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
	I1206 18:31:58.110313  196083 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1206 18:31:58.111661  196083 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:31:58.134304  196083 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:31:58.134443  196083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:31:58.189535  196083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2023-12-06 18:31:58.180490742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:31:58.189652  196083 docker.go:295] overlay module found
	I1206 18:31:58.192067  196083 out.go:177] * Using the docker driver based on existing profile
	I1206 18:31:58.193732  196083 start.go:298] selected driver: docker
	I1206 18:31:58.193750  196083 start.go:902] validating driver "docker" against &{Name:running-upgrade-343610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-343610 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 18:31:58.193841  196083 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:31:58.194668  196083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:31:58.247053  196083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2023-12-06 18:31:58.238368167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:31:58.247415  196083 cni.go:84] Creating CNI manager for ""
	I1206 18:31:58.247435  196083 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1206 18:31:58.247445  196083 start_flags.go:323] config:
	{Name:running-upgrade-343610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-343610 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 18:31:58.249900  196083 out.go:177] * Starting control plane node running-upgrade-343610 in cluster running-upgrade-343610
	I1206 18:31:58.251516  196083 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:31:58.253045  196083 out.go:177] * Pulling base image ...
	I1206 18:31:58.254475  196083 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1206 18:31:58.254506  196083 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:31:58.271041  196083 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon, skipping pull
	I1206 18:31:58.271067  196083 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in daemon, skipping load
	W1206 18:31:58.284415  196083 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1206 18:31:58.284568  196083 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/running-upgrade-343610/config.json ...
	I1206 18:31:58.284692  196083 cache.go:107] acquiring lock: {Name:mkec6c1288e64593303b92605a2e82ab935d755a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284692  196083 cache.go:107] acquiring lock: {Name:mk8b71da7a1c3dcc1d3a3e3502afaa5a842f7244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284706  196083 cache.go:107] acquiring lock: {Name:mk613ebd3c7636aec9b2b3192909ec2b851a1d44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284828  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1206 18:31:58.284829  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1206 18:31:58.284846  196083 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 168.179µs
	I1206 18:31:58.284855  196083 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 155.207µs
	I1206 18:31:58.284845  196083 cache.go:107] acquiring lock: {Name:mke7f076d4cd2533426369adaef80348984441f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284870  196083 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1206 18:31:58.284873  196083 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1206 18:31:58.284881  196083 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:31:58.284900  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1206 18:31:58.284872  196083 cache.go:107] acquiring lock: {Name:mk0178b6a828b40249b97ed78d62f26fd40a55da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284910  196083 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 67.746µs
	I1206 18:31:58.284915  196083 start.go:365] acquiring machines lock for running-upgrade-343610: {Name:mkb1c0d6670dd03d1eb8bb1a71e4c89ff41883c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284919  196083 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1206 18:31:58.284913  196083 cache.go:107] acquiring lock: {Name:mkd9886d8d88b4e4c44912d34b7f756009aaf129 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284913  196083 cache.go:107] acquiring lock: {Name:mk0e54776f18fff5fa144c38dd820871a6d063eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.284830  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 18:31:58.284973  196083 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 295.458µs
	I1206 18:31:58.284987  196083 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 18:31:58.284991  196083 start.go:369] acquired machines lock for "running-upgrade-343610" in 61.565µs
	I1206 18:31:58.285016  196083 start.go:96] Skipping create...Using existing machine configuration
	I1206 18:31:58.285026  196083 fix.go:54] fixHost starting: m01
	I1206 18:31:58.285057  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1206 18:31:58.284988  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1206 18:31:58.285089  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1206 18:31:58.285091  196083 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 213.972µs
	I1206 18:31:58.285100  196083 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 281.525µs
	I1206 18:31:58.285111  196083 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1206 18:31:58.285113  196083 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1206 18:31:58.285098  196083 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 223.411µs
	I1206 18:31:58.285120  196083 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1206 18:31:58.285208  196083 cache.go:107] acquiring lock: {Name:mk1aa4f2ac121be32db85c3dd7cbd835c2103e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:31:58.285278  196083 cli_runner.go:164] Run: docker container inspect running-upgrade-343610 --format={{.State.Status}}
	I1206 18:31:58.285289  196083 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1206 18:31:58.285296  196083 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 133.085µs
	I1206 18:31:58.285308  196083 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1206 18:31:58.285317  196083 cache.go:87] Successfully saved all images to host disk.
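
Each cache.go:107 line above acquires a named lock with Delay:500ms Timeout:10m0s before touching a cached image, so parallel test binaries never write the same tarball at once. A rough stdlib-only sketch of that acquire-with-retry shape; minikube's real locking differs, and the lock-file path here is invented:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until
// timeout, loosely mirroring the Delay/Timeout pairs in the log above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err // real I/O error, not contention
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-cache.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to write the cache")
}
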
	I1206 18:31:58.301519  196083 fix.go:102] recreateIfNeeded on running-upgrade-343610: state=Running err=<nil>
	W1206 18:31:58.301563  196083 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 18:31:58.303864  196083 out.go:177] * Updating the running docker "running-upgrade-343610" container ...
	I1206 18:31:58.305360  196083 machine.go:88] provisioning docker machine ...
	I1206 18:31:58.305388  196083 ubuntu.go:169] provisioning hostname "running-upgrade-343610"
	I1206 18:31:58.305446  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:58.322881  196083 main.go:141] libmachine: Using SSH client type: native
	I1206 18:31:58.323281  196083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I1206 18:31:58.323299  196083 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-343610 && echo "running-upgrade-343610" | sudo tee /etc/hostname
	I1206 18:31:58.436430  196083 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-343610
	
	I1206 18:31:58.436519  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:58.455472  196083 main.go:141] libmachine: Using SSH client type: native
	I1206 18:31:58.456019  196083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I1206 18:31:58.456051  196083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-343610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-343610/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-343610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:31:58.564127  196083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
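
The provisioning commands above (hostname, then the idempotent /etc/hosts edit) ride over libmachine's native SSH client. A stripped-down equivalent with golang.org/x/crypto/ssh, reusing the user, port, and key path that appear further down in this log; InsecureIgnoreHostKey is tolerable only because the target is a throwaway test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17711-9529/.minikube/machines/running-upgrade-343610/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32970", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same hostname command the provisioner runs above.
	out, err := sess.CombinedOutput(`sudo hostname running-upgrade-343610 && echo "running-upgrade-343610" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
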
	I1206 18:31:58.564154  196083 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:31:58.564189  196083 ubuntu.go:177] setting up certificates
	I1206 18:31:58.564204  196083 provision.go:83] configureAuth start
	I1206 18:31:58.564254  196083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-343610
	I1206 18:31:58.583646  196083 provision.go:138] copyHostCerts
	I1206 18:31:58.583726  196083 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem, removing ...
	I1206 18:31:58.583743  196083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:31:58.583806  196083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:31:58.583944  196083 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem, removing ...
	I1206 18:31:58.583956  196083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:31:58.583980  196083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:31:58.584054  196083 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem, removing ...
	I1206 18:31:58.584061  196083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:31:58.584082  196083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:31:58.584141  196083 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-343610 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-343610]
	I1206 18:31:58.693057  196083 provision.go:172] copyRemoteCerts
	I1206 18:31:58.693128  196083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:31:58.693162  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:58.710444  196083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/running-upgrade-343610/id_rsa Username:docker}
	I1206 18:31:58.792256  196083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:31:58.811198  196083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 18:31:58.828828  196083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 18:31:58.846039  196083 provision.go:86] duration metric: configureAuth took 281.821592ms
	I1206 18:31:58.846069  196083 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:31:58.846282  196083 config.go:182] Loaded profile config "running-upgrade-343610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1206 18:31:58.846394  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:58.863914  196083 main.go:141] libmachine: Using SSH client type: native
	I1206 18:31:58.864429  196083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32970 <nil> <nil>}
	I1206 18:31:58.864461  196083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:31:59.309124  196083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:31:59.309153  196083 machine.go:91] provisioned docker machine in 1.003777094s
	I1206 18:31:59.309164  196083 start.go:300] post-start starting for "running-upgrade-343610" (driver="docker")
	I1206 18:31:59.309173  196083 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:31:59.309221  196083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:31:59.309306  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:59.328404  196083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/running-upgrade-343610/id_rsa Username:docker}
	I1206 18:31:59.415596  196083 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:31:59.418625  196083 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:31:59.418648  196083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:31:59.418656  196083 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:31:59.418662  196083 info.go:137] Remote host: Ubuntu 19.10
	I1206 18:31:59.418671  196083 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:31:59.418717  196083 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:31:59.418782  196083 filesync.go:149] local asset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> 163462.pem in /etc/ssl/certs
	I1206 18:31:59.418861  196083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:31:59.425579  196083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:31:59.442908  196083 start.go:303] post-start completed in 133.731381ms
	I1206 18:31:59.442996  196083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:31:59.443044  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:59.460330  196083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/running-upgrade-343610/id_rsa Username:docker}
	I1206 18:31:59.537003  196083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:31:59.541119  196083 fix.go:56] fixHost completed within 1.256088464s
	I1206 18:31:59.541144  196083 start.go:83] releasing machines lock for "running-upgrade-343610", held for 1.256136246s
	I1206 18:31:59.541225  196083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-343610
	I1206 18:31:59.557944  196083 ssh_runner.go:195] Run: cat /version.json
	I1206 18:31:59.557997  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:59.558049  196083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:31:59.558126  196083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-343610
	I1206 18:31:59.575293  196083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/running-upgrade-343610/id_rsa Username:docker}
	I1206 18:31:59.577648  196083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32970 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/running-upgrade-343610/id_rsa Username:docker}
	W1206 18:31:59.686315  196083 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1206 18:31:59.686395  196083 ssh_runner.go:195] Run: systemctl --version
	I1206 18:31:59.690285  196083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:31:59.743395  196083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:31:59.747603  196083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:31:59.763070  196083 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:31:59.763175  196083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:31:59.788545  196083 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 18:31:59.788574  196083 start.go:475] detecting cgroup driver to use...
	I1206 18:31:59.788617  196083 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:31:59.788666  196083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:31:59.812428  196083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:31:59.822208  196083 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:31:59.822262  196083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:31:59.831549  196083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:31:59.840430  196083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1206 18:31:59.849435  196083 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1206 18:31:59.849494  196083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:31:59.917192  196083 docker.go:219] disabling docker service ...
	I1206 18:31:59.917258  196083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:31:59.927223  196083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:31:59.937824  196083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:32:00.013419  196083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:32:00.095928  196083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:32:00.104999  196083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:32:00.117946  196083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1206 18:32:00.118011  196083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:32:00.128587  196083 out.go:177] 
	W1206 18:32:00.130286  196083 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1206 18:32:00.130306  196083 out.go:239] * 
	W1206 18:32:00.131187  196083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 18:32:00.133648  196083 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-343610 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-06 18:32:00.152656441 +0000 UTC m=+1914.909280484
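Analysis: the HEAD binary tries to set the pause image by rewriting /etc/crio/crio.conf.d/02-crio.conf, but that file does not exist inside the v1.9.0-era kicbase container (gcr.io/k8s-minikube/kicbase:v0.0.8 on Ubuntu 19.10, per the inspect output below and the os-release probe above), so the sed exits with status 2 and start aborts with RUNTIME_ENABLE / exit status 90. A minimal fallback sketch, assuming the old image keeps its CRI-O config in the legacy single file /etc/crio/crio.conf (that path is an assumption, not shown in this log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Assumed legacy location on pre-drop-in CRI-O images; verify before relying on it.
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"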
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-343610
helpers_test.go:235: (dbg) docker inspect running-upgrade-343610:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc298ad5aa098c58eaaff55225186e87a3f8fe82fb74c107f69ea2213872b767",
	        "Created": "2023-12-06T18:30:56.605925906Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185835,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-06T18:30:57.057055702Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/fc298ad5aa098c58eaaff55225186e87a3f8fe82fb74c107f69ea2213872b767/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc298ad5aa098c58eaaff55225186e87a3f8fe82fb74c107f69ea2213872b767/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc298ad5aa098c58eaaff55225186e87a3f8fe82fb74c107f69ea2213872b767/hosts",
	        "LogPath": "/var/lib/docker/containers/fc298ad5aa098c58eaaff55225186e87a3f8fe82fb74c107f69ea2213872b767/fc298ad5aa098c58eaaff55225186e87a3f8fe82fb74c107f69ea2213872b767-json.log",
	        "Name": "/running-upgrade-343610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-343610:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ad1bc835c54732b65d9e181b1f754a5353fee214c2c4ec3e59ba5cbd4350aa5b-init/diff:/var/lib/docker/overlay2/9cf30ea5d8d3418406caa1705441ae6a8ef64a7798da6e74d863b19c96e3eabc/diff:/var/lib/docker/overlay2/57281087ef27f0ebf1aa23c2509a18d1583b4ae7dfaab7c895d00a4a022259d8/diff:/var/lib/docker/overlay2/7c4b9261c76c5da29e26b6c5631f2e962542c5fc955d790cf0f07e88243ee8bf/diff:/var/lib/docker/overlay2/4d6039abf2d8e3c0a46f29426cc6def2e6817df2eabfed41d208184a07e0a7ee/diff:/var/lib/docker/overlay2/fd340d6b2ce91ebd54b110ff7de76456b9e899dd4a0dd75a4ad5087163435e74/diff:/var/lib/docker/overlay2/dbd611130fed96ffbdeff2dafcc07923b006e9f8d0923342e812e7a5c6340b62/diff:/var/lib/docker/overlay2/f97e7c9a6a7a84a4d71d0790dce97e5c0df9a7ea3612f7de80679526b78e62f5/diff:/var/lib/docker/overlay2/6cca21b08e6f79506e902a6639798c8bcbf9f54f9a2945bbb471e84bb88dc1d1/diff:/var/lib/docker/overlay2/0be842cffc7e8ca4f674ae722d66b3a9c33813e087d4c7a0d17735fce0f037d0/diff:/var/lib/docker/overlay2/b7ce23
fd24d3a858d62762ac1d8d7ee00af717127f1524fc677f4526fc24522f/diff:/var/lib/docker/overlay2/73b21cf94c163b5560da0bf2cc2875cf97a0202a0ad1db3fc1ada03140ebd038/diff:/var/lib/docker/overlay2/213a322bf2dab489bd546f81e84a9f5f75ccccd12f10252c9a68b41b4f28d908/diff:/var/lib/docker/overlay2/5aec6d3920f3634faa74b2660ed72a362599590b4cdcd3529830431fe4519c55/diff:/var/lib/docker/overlay2/021dbc9f8ed15b49555543f5468e867b529c816976c558f2e3e20b3e6df50399/diff:/var/lib/docker/overlay2/38643503e9eb374bc6d58a30ced5849cb58228e437e8b0a9a759e8995d3505e5/diff:/var/lib/docker/overlay2/3c1a0dfe137ecee70d6fa79bf1ba8947629a0906c7447e8063b49bdfe24443fe/diff:/var/lib/docker/overlay2/6bc2ae71177174287d019889dc2679560a09cb59439751d3ad12d7a8d73adab6/diff:/var/lib/docker/overlay2/c7e6ebe5807bd9aaceb0e0e63006e57577a7cde2607186d14e5173c0ce114148/diff:/var/lib/docker/overlay2/07b16bb8e9257c8c95936d6a2e0fcf0796f9a8b47dace28bc26083eeb1d7bef4/diff:/var/lib/docker/overlay2/3f7a4864985451a362e17ca852fe29db1e62d26a5a0103ca4cdd3d5d07e8e48d/diff:/var/lib/d
ocker/overlay2/0754fde4a2ef16e224b1a3c08d35a349ec7f39169b22cd3094a7839fda230b34/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad1bc835c54732b65d9e181b1f754a5353fee214c2c4ec3e59ba5cbd4350aa5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad1bc835c54732b65d9e181b1f754a5353fee214c2c4ec3e59ba5cbd4350aa5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad1bc835c54732b65d9e181b1f754a5353fee214c2c4ec3e59ba5cbd4350aa5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-343610",
	                "Source": "/var/lib/docker/volumes/running-upgrade-343610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-343610",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-343610",
	                "name.minikube.sigs.k8s.io": "running-upgrade-343610",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "645646e0f7ba0d70d356f8b04ae9fe2dcabefa158b20bb2f310f4e3fd5931676",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32970"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/645646e0f7ba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "478dd0deb356375c9b4ba8b6bc9aa1b61baace78061d901366b655c0f60bf9e3",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "e850e73f0816226530200d738a589be34fedca83dba5bb6f4023e137f087dd35",
	                    "EndpointID": "478dd0deb356375c9b4ba8b6bc9aa1b61baace78061d901366b655c0f60bf9e3",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
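For reference, the provisioner's repeated "docker container inspect -f" calls in the trace above resolve the container's published SSH port from this NetworkSettings block (22/tcp is bound to 127.0.0.1:32970, matching the ssh client lines). The same lookup by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-343610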
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-343610 -n running-upgrade-343610
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-343610 -n running-upgrade-343610: exit status 4 (330.598039ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 18:32:00.468451  197033 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-343610" does not appear in /home/jenkins/minikube-integration/17711-9529/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-343610" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
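The exit status 4 is the stale-kubeconfig case: status.go cannot extract an endpoint because the "running-upgrade-343610" entry is missing from the kubeconfig, which is also what the "stale minikube-vm" warning points at. The fix the status output itself suggests, with the profile named explicitly:

	minikube update-context -p running-upgrade-343610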
helpers_test.go:175: Cleaning up "running-upgrade-343610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-343610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-343610: (1.90134416s)
--- FAIL: TestRunningBinaryUpgrade (66.52s)
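To reproduce the failing upgrade path outside CI, run the old and new binaries back to back against one profile. The v1.9.0 binary name below is the temp file this job downloaded (taken from the TestStoppedBinaryUpgrade trace further down); its flags for the running-upgrade profile are an assumption modeled on that trace, and any v1.9.0 build should behave the same:

	/tmp/minikube-v1.9.0.877086404.exe start -p running-upgrade-343610 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-343610 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio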

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (94.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.877086404.exe start -p stopped-upgrade-444504 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.877086404.exe start -p stopped-upgrade-444504 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m26.21096784s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.877086404.exe -p stopped-upgrade-444504 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.877086404.exe -p stopped-upgrade-444504 stop: (2.862319029s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-444504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-444504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.909990667s)
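The second start fails with the same exit status 90 as TestRunningBinaryUpgrade above. Note also the preload miss early in the trace below: the v1.18.0 cri-o preload tarball returns HTTP 404, so minikube falls back to loading each cached image individually (the cache.go lines). The miss can be checked directly:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 | head -n 1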

                                                
                                                
-- stdout --
	* [stopped-upgrade-444504] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-444504 in cluster stopped-upgrade-444504
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-444504" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 18:30:47.496660  183985 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:30:47.496987  183985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:30:47.497044  183985 out.go:309] Setting ErrFile to fd 2...
	I1206 18:30:47.497061  183985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:30:47.497433  183985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:30:47.498289  183985 out.go:303] Setting JSON to false
	I1206 18:30:47.499758  183985 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4396,"bootTime":1701883051,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:30:47.499829  183985 start.go:138] virtualization: kvm guest
	I1206 18:30:47.505647  183985 out.go:177] * [stopped-upgrade-444504] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:30:47.507811  183985 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:30:47.507821  183985 notify.go:220] Checking for updates...
	I1206 18:30:47.509524  183985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:30:47.511987  183985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:30:47.513573  183985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:30:47.515044  183985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:30:47.516537  183985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:30:47.518391  183985 config.go:182] Loaded profile config "stopped-upgrade-444504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1206 18:30:47.518431  183985 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f
	I1206 18:30:47.520705  183985 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1206 18:30:47.522040  183985 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:30:47.555034  183985 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:30:47.555168  183985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:30:47.626771  183985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-06 18:30:47.611488121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:30:47.626874  183985 docker.go:295] overlay module found
	I1206 18:30:47.629029  183985 out.go:177] * Using the docker driver based on existing profile
	I1206 18:30:47.630741  183985 start.go:298] selected driver: docker
	I1206 18:30:47.630769  183985 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-444504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-444504 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1206 18:30:47.630885  183985 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:30:47.632005  183985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:30:47.700231  183985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-06 18:30:47.691015253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:30:47.700643  183985 cni.go:84] Creating CNI manager for ""
	I1206 18:30:47.700665  183985 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1206 18:30:47.700711  183985 start_flags.go:323] config:
	{Name:stopped-upgrade-444504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-444504 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s GPUs:}
	I1206 18:30:47.703395  183985 out.go:177] * Starting control plane node stopped-upgrade-444504 in cluster stopped-upgrade-444504
	I1206 18:30:47.705051  183985 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:30:47.706595  183985 out.go:177] * Pulling base image ...
	I1206 18:30:47.708005  183985 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1206 18:30:47.708045  183985 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	W1206 18:30:47.731796  183985 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1206 18:30:47.731999  183985 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/stopped-upgrade-444504/config.json ...
	I1206 18:30:47.732042  183985 cache.go:107] acquiring lock: {Name:mkec6c1288e64593303b92605a2e82ab935d755a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.732159  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1206 18:30:47.732174  183985 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 145.58µs
	I1206 18:30:47.732195  183985 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1206 18:30:47.732210  183985 cache.go:107] acquiring lock: {Name:mkd9886d8d88b4e4c44912d34b7f756009aaf129 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.732299  183985 cache.go:107] acquiring lock: {Name:mke7f076d4cd2533426369adaef80348984441f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.732350  183985 cache.go:107] acquiring lock: {Name:mk0e54776f18fff5fa144c38dd820871a6d063eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.732381  183985 cache.go:107] acquiring lock: {Name:mk8b71da7a1c3dcc1d3a3e3502afaa5a842f7244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.733932  183985 cache.go:107] acquiring lock: {Name:mk1aa4f2ac121be32db85c3dd7cbd835c2103e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.734161  183985 cache.go:107] acquiring lock: {Name:mk0178b6a828b40249b97ed78d62f26fd40a55da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.734259  183985 cache.go:107] acquiring lock: {Name:mk613ebd3c7636aec9b2b3192909ec2b851a1d44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.734304  183985 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon, skipping pull
	I1206 18:30:47.734319  183985 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in daemon, skipping load
	I1206 18:30:47.734344  183985 cache.go:194] Successfully downloaded all kic artifacts
	I1206 18:30:47.734372  183985 start.go:365] acquiring machines lock for stopped-upgrade-444504: {Name:mkf600426a87bed6191008b5b581b520d4183578 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 18:30:47.734440  183985 start.go:369] acquired machines lock for "stopped-upgrade-444504" in 57.216µs
	I1206 18:30:47.734464  183985 start.go:96] Skipping create...Using existing machine configuration
	I1206 18:30:47.734473  183985 fix.go:54] fixHost starting: m01
	I1206 18:30:47.734764  183985 cli_runner.go:164] Run: docker container inspect stopped-upgrade-444504 --format={{.State.Status}}
	I1206 18:30:47.757775  183985 fix.go:102] recreateIfNeeded on stopped-upgrade-444504: state=Stopped err=<nil>
	W1206 18:30:47.757824  183985 fix.go:128] unexpected machine state, will restart: <nil>
	I1206 18:30:47.763914  183985 out.go:177] * Restarting existing docker container for "stopped-upgrade-444504" ...
	I1206 18:30:47.765590  183985 cli_runner.go:164] Run: docker start stopped-upgrade-444504
	I1206 18:30:48.044298  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1206 18:30:48.044335  183985 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 310.106109ms
	I1206 18:30:48.044369  183985 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1206 18:30:48.064006  183985 cli_runner.go:164] Run: docker container inspect stopped-upgrade-444504 --format={{.State.Status}}
	I1206 18:30:48.086018  183985 kic.go:430] container "stopped-upgrade-444504" state is running.
	I1206 18:30:48.086508  183985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-444504
	I1206 18:30:48.107629  183985 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/stopped-upgrade-444504/config.json ...
	I1206 18:30:48.107876  183985 machine.go:88] provisioning docker machine ...
	I1206 18:30:48.107893  183985 ubuntu.go:169] provisioning hostname "stopped-upgrade-444504"
	I1206 18:30:48.107945  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:48.138271  183985 main.go:141] libmachine: Using SSH client type: native
	I1206 18:30:48.138836  183985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32967 <nil> <nil>}
	I1206 18:30:48.138854  183985 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-444504 && echo "stopped-upgrade-444504" | sudo tee /etc/hostname
	I1206 18:30:48.139516  183985 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45332->127.0.0.1:32967: read: connection reset by peer
	I1206 18:30:48.398593  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1206 18:30:48.398622  183985 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 664.726637ms
	I1206 18:30:48.398640  183985 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1206 18:30:48.932538  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1206 18:30:48.932579  183985 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.198437639s
	I1206 18:30:48.932596  183985 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1206 18:30:49.017381  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1206 18:30:49.017412  183985 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.285042734s
	I1206 18:30:49.017429  183985 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1206 18:30:49.063151  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1206 18:30:49.063180  183985 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.330970817s
	I1206 18:30:49.063198  183985 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1206 18:30:49.627220  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1206 18:30:49.627247  183985 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.894901433s
	I1206 18:30:49.627265  183985 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1206 18:30:49.697904  183985 cache.go:115] /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1206 18:30:49.697932  183985 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.965666075s
	I1206 18:30:49.697950  183985 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1206 18:30:49.697967  183985 cache.go:87] Successfully saved all images to host disk.
	I1206 18:30:51.252441  183985 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-444504
	
	I1206 18:30:51.252528  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:51.268621  183985 main.go:141] libmachine: Using SSH client type: native
	I1206 18:30:51.268959  183985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32967 <nil> <nil>}
	I1206 18:30:51.268979  183985 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-444504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-444504/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-444504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 18:30:51.372182  183985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1206 18:30:51.372221  183985 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17711-9529/.minikube CaCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17711-9529/.minikube}
	I1206 18:30:51.372286  183985 ubuntu.go:177] setting up certificates
	I1206 18:30:51.372311  183985 provision.go:83] configureAuth start
	I1206 18:30:51.372380  183985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-444504
	I1206 18:30:51.389695  183985 provision.go:138] copyHostCerts
	I1206 18:30:51.389758  183985 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem, removing ...
	I1206 18:30:51.389766  183985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem
	I1206 18:30:51.389831  183985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/ca.pem (1078 bytes)
	I1206 18:30:51.389968  183985 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem, removing ...
	I1206 18:30:51.389993  183985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem
	I1206 18:30:51.390022  183985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/cert.pem (1123 bytes)
	I1206 18:30:51.390095  183985 exec_runner.go:144] found /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem, removing ...
	I1206 18:30:51.390103  183985 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem
	I1206 18:30:51.390123  183985 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17711-9529/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17711-9529/.minikube/key.pem (1675 bytes)
	I1206 18:30:51.390178  183985 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-444504 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-444504]
	I1206 18:30:51.736705  183985 provision.go:172] copyRemoteCerts
	I1206 18:30:51.736778  183985 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 18:30:51.736819  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:51.753500  183985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/stopped-upgrade-444504/id_rsa Username:docker}
	I1206 18:30:51.836035  183985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1206 18:30:51.853756  183985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 18:30:51.870952  183985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 18:30:51.888412  183985 provision.go:86] duration metric: configureAuth took 516.083735ms
	I1206 18:30:51.888446  183985 ubuntu.go:193] setting minikube options for container-runtime
	I1206 18:30:51.888653  183985 config.go:182] Loaded profile config "stopped-upgrade-444504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1206 18:30:51.888771  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:51.906682  183985 main.go:141] libmachine: Using SSH client type: native
	I1206 18:30:51.907007  183985 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 32967 <nil> <nil>}
	I1206 18:30:51.907029  183985 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 18:30:52.559001  183985 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 18:30:52.559031  183985 machine.go:91] provisioned docker machine in 4.45114408s
	I1206 18:30:52.559045  183985 start.go:300] post-start starting for "stopped-upgrade-444504" (driver="docker")
	I1206 18:30:52.559059  183985 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 18:30:52.559141  183985 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 18:30:52.559189  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:52.577469  183985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/stopped-upgrade-444504/id_rsa Username:docker}
	I1206 18:30:52.656001  183985 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 18:30:52.658955  183985 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1206 18:30:52.658976  183985 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 18:30:52.658984  183985 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1206 18:30:52.658989  183985 info.go:137] Remote host: Ubuntu 19.10
	I1206 18:30:52.659005  183985 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/addons for local assets ...
	I1206 18:30:52.659053  183985 filesync.go:126] Scanning /home/jenkins/minikube-integration/17711-9529/.minikube/files for local assets ...
	I1206 18:30:52.659115  183985 filesync.go:149] local asset: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem -> 163462.pem in /etc/ssl/certs
	I1206 18:30:52.659196  183985 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 18:30:52.665799  183985 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/ssl/certs/163462.pem --> /etc/ssl/certs/163462.pem (1708 bytes)
	I1206 18:30:52.682737  183985 start.go:303] post-start completed in 123.674304ms
	I1206 18:30:52.682821  183985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:30:52.682874  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:52.699352  183985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/stopped-upgrade-444504/id_rsa Username:docker}
	I1206 18:30:52.776882  183985 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 18:30:52.780622  183985 fix.go:56] fixHost completed within 5.046147017s
	I1206 18:30:52.780651  183985 start.go:83] releasing machines lock for "stopped-upgrade-444504", held for 5.046194926s
	I1206 18:30:52.780715  183985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-444504
	I1206 18:30:52.796438  183985 ssh_runner.go:195] Run: cat /version.json
	I1206 18:30:52.796483  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:52.796539  183985 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 18:30:52.796607  183985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-444504
	I1206 18:30:52.813700  183985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/stopped-upgrade-444504/id_rsa Username:docker}
	I1206 18:30:52.814299  183985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32967 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/stopped-upgrade-444504/id_rsa Username:docker}
	W1206 18:30:52.917575  183985 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1206 18:30:52.917655  183985 ssh_runner.go:195] Run: systemctl --version
	I1206 18:30:52.921559  183985 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 18:30:52.969290  183985 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 18:30:52.973484  183985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:30:52.989038  183985 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1206 18:30:52.989125  183985 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 18:30:53.013869  183985 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 18:30:53.013893  183985 start.go:475] detecting cgroup driver to use...
	I1206 18:30:53.013923  183985 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1206 18:30:53.013974  183985 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 18:30:53.033300  183985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 18:30:53.042088  183985 docker.go:203] disabling cri-docker service (if available) ...
	I1206 18:30:53.042145  183985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 18:30:53.051005  183985 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 18:30:53.059565  183985 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1206 18:30:53.068193  183985 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1206 18:30:53.068254  183985 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 18:30:53.139365  183985 docker.go:219] disabling docker service ...
	I1206 18:30:53.139445  183985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 18:30:53.149837  183985 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 18:30:53.159247  183985 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 18:30:53.218367  183985 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 18:30:53.291045  183985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 18:30:53.299933  183985 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 18:30:53.312199  183985 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1206 18:30:53.312254  183985 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 18:30:53.322813  183985 out.go:177] 
	W1206 18:30:53.324539  183985 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1206 18:30:53.324562  183985 out.go:239] * 
	* 
	W1206 18:30:53.325395  183985 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 18:30:53.327570  183985 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-444504 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (94.99s)
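The root cause is visible in the stderr above: the pause-image rewrite runs sed against /etc/crio/crio.conf.d/02-crio.conf, a drop-in file that does not exist on the Ubuntu 19.10 image provisioned by the old v1.9.0 binary, so sed exits with status 2. A guarded variant of that step would avoid the failure; the following is only an illustrative sketch (the fallback of appending to /etc/crio/crio.conf is an assumption, not minikube's actual remediation):

	conf=/etc/crio/crio.conf.d/02-crio.conf
	if sudo test -f "$conf"; then
		# Drop-in exists: rewrite the pause image in place, as the failing step attempted.
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
	else
		# Hypothetical fallback for older base images that only ship /etc/crio/crio.conf.
		echo 'pause_image = "registry.k8s.io/pause:3.2"' | sudo tee -a /etc/crio/crio.conf >/dev/null
	fi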

                                                
                                    

Test pass (282/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.99
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 5.34
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.1/json-events 8.22
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.21
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
25 TestDownloadOnlyKic 1.28
26 TestBinaryMirror 0.74
27 TestOffline 88.73
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 132.84
34 TestAddons/parallel/Registry 14.45
36 TestAddons/parallel/InspektorGadget 10.68
37 TestAddons/parallel/MetricsServer 5.72
38 TestAddons/parallel/HelmTiller 9.9
40 TestAddons/parallel/CSI 83.56
41 TestAddons/parallel/Headlamp 10.97
42 TestAddons/parallel/CloudSpanner 5.67
43 TestAddons/parallel/LocalPath 8.54
44 TestAddons/parallel/NvidiaDevicePlugin 5.54
47 TestAddons/serial/GCPAuth/Namespaces 0.12
48 TestAddons/StoppedEnableDisable 12.19
49 TestCertOptions 27.57
50 TestCertExpiration 225.69
52 TestForceSystemdFlag 39.13
53 TestForceSystemdEnv 38.93
55 TestKVMDriverInstallOrUpdate 1.48
59 TestErrorSpam/setup 20.43
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.88
62 TestErrorSpam/pause 1.51
63 TestErrorSpam/unpause 1.49
64 TestErrorSpam/stop 1.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 66.87
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 35.14
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.71
76 TestFunctional/serial/CacheCmd/cache/add_local 0.79
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 29.47
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.38
87 TestFunctional/serial/LogsFileCmd 1.39
88 TestFunctional/serial/InvalidService 4.62
90 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DashboardCmd 8.7
92 TestFunctional/parallel/DryRun 0.42
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.1
98 TestFunctional/parallel/ServiceCmdConnect 13.54
99 TestFunctional/parallel/AddonsCmd 0.21
100 TestFunctional/parallel/PersistentVolumeClaim 26.45
102 TestFunctional/parallel/SSHCmd 0.54
103 TestFunctional/parallel/CpCmd 1.1
104 TestFunctional/parallel/MySQL 23.51
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.75
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
114 TestFunctional/parallel/License 0.19
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.49
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.13
124 TestFunctional/parallel/ImageCommands/Setup 1.03
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.35
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.25
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.02
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ServiceCmd/DeployApp 6.19
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.05
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
139 TestFunctional/parallel/ProfileCmd/profile_list 0.38
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
141 TestFunctional/parallel/MountCmd/any-port 6.15
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
144 TestFunctional/parallel/ServiceCmd/List 0.52
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
148 TestFunctional/parallel/ServiceCmd/Format 0.39
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.16
150 TestFunctional/parallel/ServiceCmd/URL 0.36
151 TestFunctional/parallel/MountCmd/specific-port 1.73
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
156 TestFunctional/delete_addon-resizer_images 0.07
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 62.42
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.34
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
169 TestJSONOutput/start/Command 66.76
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.66
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.6
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.74
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.22
194 TestKicCustomNetwork/create_custom_network 34.06
195 TestKicCustomNetwork/use_default_bridge_network 23.6
196 TestKicExistingNetwork 24.24
197 TestKicCustomSubnet 26.81
198 TestKicStaticIP 27.14
199 TestMainNoArgs 0.06
200 TestMinikubeProfile 54.45
203 TestMountStart/serial/StartWithMountFirst 8.06
204 TestMountStart/serial/VerifyMountFirst 0.25
205 TestMountStart/serial/StartWithMountSecond 8.23
206 TestMountStart/serial/VerifyMountSecond 0.26
207 TestMountStart/serial/DeleteFirst 1.63
208 TestMountStart/serial/VerifyMountPostDelete 0.26
209 TestMountStart/serial/Stop 1.2
210 TestMountStart/serial/RestartStopped 6.87
211 TestMountStart/serial/VerifyMountPostStop 0.26
214 TestMultiNode/serial/FreshStart2Nodes 87.2
215 TestMultiNode/serial/DeployApp2Nodes 4.09
217 TestMultiNode/serial/AddNode 45.93
218 TestMultiNode/serial/MultiNodeLabels 0.06
219 TestMultiNode/serial/ProfileList 0.28
220 TestMultiNode/serial/CopyFile 9.21
221 TestMultiNode/serial/StopNode 2.13
222 TestMultiNode/serial/StartAfterStop 10.7
223 TestMultiNode/serial/RestartKeepsNodes 116.58
224 TestMultiNode/serial/DeleteNode 4.67
225 TestMultiNode/serial/StopMultiNode 23.82
226 TestMultiNode/serial/RestartMultiNode 73.45
227 TestMultiNode/serial/ValidateNameConflict 26.44
234 TestScheduledStopUnix 97.48
237 TestInsufficientStorage 10.28
240 TestKubernetesUpgrade 354.07
241 TestMissingContainerUpgrade 142.67
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
244 TestNoKubernetes/serial/StartWithK8s 35.9
245 TestNoKubernetes/serial/StartWithStopK8s 10.6
246 TestNoKubernetes/serial/Start 5.07
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
248 TestNoKubernetes/serial/ProfileList 1.44
249 TestNoKubernetes/serial/Stop 1.23
250 TestNoKubernetes/serial/StartNoArgs 6.58
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
259 TestNetworkPlugins/group/false 4.05
260 TestStoppedBinaryUpgrade/Setup 0.43
265 TestStoppedBinaryUpgrade/MinikubeLogs 0.54
274 TestPause/serial/Start 47.1
275 TestNetworkPlugins/group/auto/Start 71.89
276 TestNetworkPlugins/group/kindnet/Start 66.37
277 TestPause/serial/SecondStartNoReconfiguration 30.05
278 TestPause/serial/Pause 0.76
279 TestPause/serial/VerifyStatus 0.36
280 TestPause/serial/Unpause 0.68
281 TestPause/serial/PauseAgain 0.8
282 TestPause/serial/DeletePaused 2.69
283 TestPause/serial/VerifyDeletedResources 0.62
284 TestNetworkPlugins/group/calico/Start 64.66
285 TestNetworkPlugins/group/auto/KubeletFlags 0.29
286 TestNetworkPlugins/group/auto/NetCatPod 10.3
287 TestNetworkPlugins/group/auto/DNS 0.17
288 TestNetworkPlugins/group/auto/Localhost 0.16
289 TestNetworkPlugins/group/auto/HairPin 0.15
290 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
291 TestNetworkPlugins/group/custom-flannel/Start 55.62
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
293 TestNetworkPlugins/group/kindnet/NetCatPod 13.36
294 TestNetworkPlugins/group/kindnet/DNS 0.17
295 TestNetworkPlugins/group/kindnet/Localhost 0.13
296 TestNetworkPlugins/group/kindnet/HairPin 0.15
297 TestNetworkPlugins/group/calico/ControllerPod 5.02
298 TestNetworkPlugins/group/enable-default-cni/Start 39.49
299 TestNetworkPlugins/group/calico/KubeletFlags 0.33
300 TestNetworkPlugins/group/calico/NetCatPod 13.37
301 TestNetworkPlugins/group/calico/DNS 0.18
302 TestNetworkPlugins/group/calico/Localhost 0.16
303 TestNetworkPlugins/group/calico/HairPin 0.14
304 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
305 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
306 TestNetworkPlugins/group/custom-flannel/DNS 0.18
307 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
308 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
309 TestNetworkPlugins/group/flannel/Start 62.16
310 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
311 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
312 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
313 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
314 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
315 TestNetworkPlugins/group/bridge/Start 41.54
317 TestStartStop/group/old-k8s-version/serial/FirstStart 108.39
319 TestStartStop/group/no-preload/serial/FirstStart 81.58
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
321 TestNetworkPlugins/group/bridge/NetCatPod 9.37
322 TestNetworkPlugins/group/flannel/ControllerPod 5.02
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
324 TestNetworkPlugins/group/flannel/NetCatPod 9.3
325 TestNetworkPlugins/group/bridge/DNS 0.19
326 TestNetworkPlugins/group/bridge/Localhost 0.14
327 TestNetworkPlugins/group/bridge/HairPin 0.16
328 TestNetworkPlugins/group/flannel/DNS 0.16
329 TestNetworkPlugins/group/flannel/Localhost 0.14
330 TestNetworkPlugins/group/flannel/HairPin 0.15
332 TestStartStop/group/embed-certs/serial/FirstStart 45.1
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.49
335 TestStartStop/group/no-preload/serial/DeployApp 9.86
336 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
337 TestStartStop/group/no-preload/serial/Stop 12.03
338 TestStartStop/group/embed-certs/serial/DeployApp 8.33
339 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/no-preload/serial/SecondStart 335.66
341 TestStartStop/group/old-k8s-version/serial/DeployApp 8.37
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
343 TestStartStop/group/embed-certs/serial/Stop 11.96
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
345 TestStartStop/group/old-k8s-version/serial/Stop 12.1
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
347 TestStartStop/group/embed-certs/serial/SecondStart 335.66
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
349 TestStartStop/group/old-k8s-version/serial/SecondStart 433.92
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.38
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.18
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.81
355 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
356 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
358 TestStartStop/group/no-preload/serial/Pause 3.48
359 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.02
361 TestStartStop/group/newest-cni/serial/FirstStart 35.22
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
364 TestStartStop/group/embed-certs/serial/Pause 3.27
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.02
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
368 TestStartStop/group/newest-cni/serial/Stop 3.09
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
370 TestStartStop/group/newest-cni/serial/SecondStart 25.11
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
377 TestStartStop/group/newest-cni/serial/Pause 2.43
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
381 TestStartStop/group/old-k8s-version/serial/Pause 2.58
TestDownloadOnly/v1.16.0/json-events (9.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480808 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480808 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.98615148s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.99s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480808
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480808: exit status 85 (72.463987ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-480808 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |          |
	|         | -p download-only-480808        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:00:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:00:05.342170   16358 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:00:05.342421   16358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:05.342431   16358 out.go:309] Setting ErrFile to fd 2...
	I1206 18:00:05.342435   16358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:05.342630   16358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	W1206 18:00:05.342737   16358 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17711-9529/.minikube/config/config.json: open /home/jenkins/minikube-integration/17711-9529/.minikube/config/config.json: no such file or directory
	I1206 18:00:05.343325   16358 out.go:303] Setting JSON to true
	I1206 18:00:05.344172   16358 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2554,"bootTime":1701883051,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:00:05.344233   16358 start.go:138] virtualization: kvm guest
	I1206 18:00:05.347154   16358 out.go:97] [download-only-480808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:00:05.348978   16358 out.go:169] MINIKUBE_LOCATION=17711
	W1206 18:00:05.347302   16358 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 18:00:05.347362   16358 notify.go:220] Checking for updates...
	I1206 18:00:05.351945   16358 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:00:05.353577   16358 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:00:05.355120   16358 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:00:05.356678   16358 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 18:00:05.359391   16358 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:00:05.359666   16358 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:00:05.381198   16358 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:00:05.381310   16358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:05.734839   16358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-06 18:00:05.726726064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:05.734973   16358 docker.go:295] overlay module found
	I1206 18:00:05.737063   16358 out.go:97] Using the docker driver based on user configuration
	I1206 18:00:05.737087   16358 start.go:298] selected driver: docker
	I1206 18:00:05.737094   16358 start.go:902] validating driver "docker" against <nil>
	I1206 18:00:05.737200   16358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:05.792360   16358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-06 18:00:05.784204032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:05.792514   16358 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1206 18:00:05.792987   16358 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1206 18:00:05.793148   16358 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 18:00:05.795395   16358 out.go:169] Using Docker driver with root privileges
	I1206 18:00:05.797050   16358 cni.go:84] Creating CNI manager for ""
	I1206 18:00:05.797071   16358 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:00:05.797082   16358 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 18:00:05.797095   16358 start_flags.go:323] config:
	{Name:download-only-480808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-480808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:00:05.798920   16358 out.go:97] Starting control plane node download-only-480808 in cluster download-only-480808
	I1206 18:00:05.798933   16358 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:00:05.800518   16358 out.go:97] Pulling base image ...
	I1206 18:00:05.800540   16358 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 18:00:05.800583   16358 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:00:05.815367   16358 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f to local cache
	I1206 18:00:05.815547   16358 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory
	I1206 18:00:05.815659   16358 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f to local cache
	I1206 18:00:05.823267   16358 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1206 18:00:05.823288   16358 cache.go:56] Caching tarball of preloaded images
	I1206 18:00:05.823463   16358 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1206 18:00:05.825756   16358 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1206 18:00:05.825778   16358 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:05.850350   16358 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-480808"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (5.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480808 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480808 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.337053751s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.34s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480808
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480808: exit status 85 (70.633183ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-480808 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |          |
	|         | -p download-only-480808        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-480808 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |          |
	|         | -p download-only-480808        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:00:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:00:15.402048   16515 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:00:15.402303   16515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:15.402312   16515 out.go:309] Setting ErrFile to fd 2...
	I1206 18:00:15.402316   16515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:15.402480   16515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	W1206 18:00:15.402579   16515 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17711-9529/.minikube/config/config.json: open /home/jenkins/minikube-integration/17711-9529/.minikube/config/config.json: no such file or directory
	I1206 18:00:15.402974   16515 out.go:303] Setting JSON to true
	I1206 18:00:15.403776   16515 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2564,"bootTime":1701883051,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:00:15.403838   16515 start.go:138] virtualization: kvm guest
	I1206 18:00:15.406388   16515 out.go:97] [download-only-480808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:00:15.408082   16515 out.go:169] MINIKUBE_LOCATION=17711
	I1206 18:00:15.406575   16515 notify.go:220] Checking for updates...
	I1206 18:00:15.411270   16515 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:00:15.412933   16515 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:00:15.414458   16515 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:00:15.415899   16515 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 18:00:15.418929   16515 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:00:15.419392   16515 config.go:182] Loaded profile config "download-only-480808": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1206 18:00:15.419440   16515 start.go:810] api.Load failed for download-only-480808: filestore "download-only-480808": Docker machine "download-only-480808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:00:15.419515   16515 driver.go:392] Setting default libvirt URI to qemu:///system
	W1206 18:00:15.419548   16515 start.go:810] api.Load failed for download-only-480808: filestore "download-only-480808": Docker machine "download-only-480808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:00:15.439448   16515 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:00:15.439540   16515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:15.490799   16515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-06 18:00:15.482792483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:15.490895   16515 docker.go:295] overlay module found
	I1206 18:00:15.493081   16515 out.go:97] Using the docker driver based on existing profile
	I1206 18:00:15.493113   16515 start.go:298] selected driver: docker
	I1206 18:00:15.493118   16515 start.go:902] validating driver "docker" against &{Name:download-only-480808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-480808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:00:15.493269   16515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:15.544883   16515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-06 18:00:15.53724509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:15.545522   16515 cni.go:84] Creating CNI manager for ""
	I1206 18:00:15.545541   16515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:00:15.545553   16515 start_flags.go:323] config:
	{Name:download-only-480808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-480808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I1206 18:00:15.547596   16515 out.go:97] Starting control plane node download-only-480808 in cluster download-only-480808
	I1206 18:00:15.547615   16515 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:00:15.549258   16515 out.go:97] Pulling base image ...
	I1206 18:00:15.549283   16515 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:00:15.549411   16515 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:00:15.564139   16515 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f to local cache
	I1206 18:00:15.564308   16515 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory
	I1206 18:00:15.564329   16515 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory, skipping pull
	I1206 18:00:15.564335   16515 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in cache, skipping pull
	I1206 18:00:15.564346   16515 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f as a tarball
	I1206 18:00:15.568646   16515 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:00:15.568669   16515 cache.go:56] Caching tarball of preloaded images
	I1206 18:00:15.568806   16515 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:00:15.570862   16515 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1206 18:00:15.570885   16515 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:15.595030   16515 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1206 18:00:19.125242   16515 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:19.125356   16515 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:20.066063   16515 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1206 18:00:20.066202   16515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/download-only-480808/config.json ...
	I1206 18:00:20.066388   16515 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1206 18:00:20.066572   16515 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-480808"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
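
The download step above appends the expected md5 to the preload URL as a ?checksum= query parameter; that parameter is an instruction to minikube's own downloader (GCS ignores it), and the "saving/verifying checksum" lines show the check happening after the fetch. A minimal sketch of reproducing the same fetch and verification by hand, using the URL and md5 from the log lines above:

	# fetch the v1.28.4 CRI-O preload and verify it the way minikube does
	URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	curl -fSLo preload.tar.lz4 "$URL"
	echo "b0bd7b3b222c094c365d9c9e10e48fc7  preload.tar.lz4" | md5sum -c -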

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/json-events (8.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480808 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480808 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.216421082s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (8.22s)
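
With -o=json the start command emits one CloudEvent per line instead of human-readable output, which is what this json-events test consumes. A sketch of pulling a progress trace out of such a run; the event type and data fields are assumed from recent minikube releases, and the profile name is illustrative:

	out/minikube-linux-amd64 start -o=json --download-only -p demo \
	  --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'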

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480808
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480808: exit status 85 (75.625494ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-480808 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |          |
	|         | -p download-only-480808           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-480808 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |          |
	|         | -p download-only-480808           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-480808 | jenkins | v1.32.0 | 06 Dec 23 18:00 UTC |          |
	|         | -p download-only-480808           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/06 18:00:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 18:00:20.812330   16657 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:00:20.812490   16657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:20.812499   16657 out.go:309] Setting ErrFile to fd 2...
	I1206 18:00:20.812504   16657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:00:20.812671   16657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	W1206 18:00:20.812780   16657 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17711-9529/.minikube/config/config.json: open /home/jenkins/minikube-integration/17711-9529/.minikube/config/config.json: no such file or directory
	I1206 18:00:20.813176   16657 out.go:303] Setting JSON to true
	I1206 18:00:20.813973   16657 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2570,"bootTime":1701883051,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:00:20.814037   16657 start.go:138] virtualization: kvm guest
	I1206 18:00:20.816736   16657 out.go:97] [download-only-480808] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:00:20.818588   16657 out.go:169] MINIKUBE_LOCATION=17711
	I1206 18:00:20.816934   16657 notify.go:220] Checking for updates...
	I1206 18:00:20.821810   16657 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:00:20.823407   16657 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:00:20.824913   16657 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:00:20.826279   16657 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 18:00:20.828764   16657 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 18:00:20.829243   16657 config.go:182] Loaded profile config "download-only-480808": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1206 18:00:20.829283   16657 start.go:810] api.Load failed for download-only-480808: filestore "download-only-480808": Docker machine "download-only-480808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:00:20.829370   16657 driver.go:392] Setting default libvirt URI to qemu:///system
	W1206 18:00:20.829402   16657 start.go:810] api.Load failed for download-only-480808: filestore "download-only-480808": Docker machine "download-only-480808" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1206 18:00:20.850680   16657 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:00:20.850785   16657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:20.903357   16657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-06 18:00:20.894930004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:20.903463   16657 docker.go:295] overlay module found
	I1206 18:00:20.905687   16657 out.go:97] Using the docker driver based on existing profile
	I1206 18:00:20.905710   16657 start.go:298] selected driver: docker
	I1206 18:00:20.905715   16657 start.go:902] validating driver "docker" against &{Name:download-only-480808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-480808 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:00:20.905856   16657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:00:20.959149   16657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-06 18:00:20.951092731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:00:20.959867   16657 cni.go:84] Creating CNI manager for ""
	I1206 18:00:20.959890   16657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 18:00:20.959906   16657 start_flags.go:323] config:
	{Name:download-only-480808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-480808 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I1206 18:00:20.962273   16657 out.go:97] Starting control plane node download-only-480808 in cluster download-only-480808
	I1206 18:00:20.962306   16657 cache.go:121] Beginning downloading kic base image for docker with crio
	I1206 18:00:20.963970   16657 out.go:97] Pulling base image ...
	I1206 18:00:20.964001   16657 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 18:00:20.964141   16657 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local docker daemon
	I1206 18:00:20.979658   16657 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f to local cache
	I1206 18:00:20.979781   16657 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory
	I1206 18:00:20.979797   16657 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f in local cache directory, skipping pull
	I1206 18:00:20.979801   16657 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f exists in cache, skipping pull
	I1206 18:00:20.979816   16657 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f as a tarball
	I1206 18:00:20.987205   16657 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 18:00:20.987230   16657 cache.go:56] Caching tarball of preloaded images
	I1206 18:00:20.987346   16657 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 18:00:20.989703   16657 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1206 18:00:20.989731   16657 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:21.017642   16657 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:26a42be529125e55182ed93a618b213b -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1206 18:00:24.982514   16657 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:24.982612   16657 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17711-9529/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1206 18:00:25.803834   16657 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1206 18:00:25.803972   16657 profile.go:148] Saving config to /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/download-only-480808/config.json ...
	I1206 18:00:25.804183   16657 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1206 18:00:25.804384   16657 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17711-9529/.minikube/cache/linux/amd64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-480808"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-480808
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.28s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-248947 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-248947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-248947
--- PASS: TestDownloadOnlyKic (1.28s)

                                                
                                    
x
+
TestBinaryMirror (0.74s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-582843 --alsologtostderr --binary-mirror http://127.0.0.1:46823 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-582843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-582843
--- PASS: TestBinaryMirror (0.74s)
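
--binary-mirror substitutes the given base URL for dl.k8s.io when fetching kubectl, kubelet and kubeadm (compare the dl.k8s.io/release/... URL in the v1.28.4 log earlier). A sketch of a throwaway local mirror; the directory layout mirroring dl.k8s.io and the need for .sha256 sidecar files are assumptions:

	mkdir -p mirror/release/v1.28.4/bin/linux/amd64
	cp kubectl kubectl.sha256 kubelet kubelet.sha256 kubeadm kubeadm.sha256 \
	  mirror/release/v1.28.4/bin/linux/amd64/        # pre-fetched binaries + checksums
	(cd mirror && python3 -m http.server 46823) &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:46823 --driver=docker --container-runtime=crio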

                                                
                                    
x
+
TestOffline (88.73s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-996697 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-996697 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m22.078425451s)
helpers_test.go:175: Cleaning up "offline-crio-996697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-996697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-996697: (6.6513059s)
--- PASS: TestOffline (88.73s)
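
The offline start succeeds because everything it needs (kicbase image, preload tarball, binaries) is already in the local cache by this point in the suite. A sketch of the same idea in isolation, warming the cache first and then starting with networking removed; the profile name is illustrative:

	out/minikube-linux-amd64 start --download-only -p offline-demo --driver=docker --container-runtime=crio
	# ...now disconnect, or block egress...
	out/minikube-linux-amd64 start -p offline-demo --alsologtostderr --memory=2048 --wait=true --driver=docker --container-runtime=crio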

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-906021
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-906021: exit status 85 (65.336037ms)

                                                
                                                
-- stdout --
	* Profile "addons-906021" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-906021"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-906021
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-906021: exit status 85 (67.801591ms)

                                                
                                                
-- stdout --
	* Profile "addons-906021" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-906021"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (132.84s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-906021 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-906021 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m12.839299001s)
--- PASS: TestAddons/Setup (132.84s)
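
A quick way to confirm what that single start enabled is to list the addon states on the profile and check that the addon workloads came up; both commands below are stock minikube/kubectl:

	out/minikube-linux-amd64 -p addons-906021 addons list
	kubectl --context addons-906021 get pods -A    # addon pods should be Running/Completed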

                                                
                                    
x
+
TestAddons/parallel/Registry (14.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 13.835763ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xw24r" [82901825-2736-48f6-872f-0b11f797e48d] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011240179s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6qg5h" [66569529-08b4-49b4-b8d3-adc07070b1c8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013113368s
addons_test.go:339: (dbg) Run:  kubectl --context addons-906021 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-906021 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-906021 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.626161632s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 ip
2023/12/06 18:02:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.45s)
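
The [DEBUG] line above is the test probing the registry at the node IP on port 5000. That endpoint speaks the standard Docker Registry HTTP API v2, so its contents can be listed directly, assuming (as in this test) the node IP is reachable from the host:

	IP=$(out/minikube-linux-amd64 -p addons-906021 ip)
	curl -s "http://$IP:5000/v2/_catalog"    # e.g. {"repositories":[...]}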

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.68s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lltzp" [ecc23ce6-0308-4dfd-8a1b-652f696a4330] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.02038118s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-906021
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-906021: (5.659893613s)
--- PASS: TestAddons/parallel/InspektorGadget (10.68s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 68.709257ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dvqrm" [009f5378-9bf7-4107-ba9e-30c7fa55e4ff] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014910846s
addons_test.go:414: (dbg) Run:  kubectl --context addons-906021 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)
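
The kubectl top call above is served by the metrics.k8s.io aggregated API that the addon registers, which can also be read raw when top's formatting gets in the way:

	kubectl --context addons-906021 top nodes
	kubectl --context addons-906021 get --raw /apis/metrics.k8s.io/v1beta1/nodes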

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.9s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.393999ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-tmmzw" [9f43c435-3e04-42b6-9440-5f692aa79d97] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012096215s
addons_test.go:472: (dbg) Run:  kubectl --context addons-906021 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-906021 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.398195924s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p addons-906021 addons disable helm-tiller --alsologtostderr -v=1: (1.487663536s)
--- PASS: TestAddons/parallel/HelmTiller (9.90s)
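
The test drives Tiller by running the Helm 2 client image inside the cluster. With a local Helm 2.x binary the same check works from the host, since the v2 client tunnels to Tiller through the current kubeconfig context (both flags are Helm 2 only):

	helm version --tiller-namespace kube-system
	helm ls --tiller-namespace kube-system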

                                                
                                    
x
+
TestAddons/parallel/CSI (83.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 14.101329ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-906021 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-906021 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [696b9a05-2e4d-427c-9338-0d33cdb1fa46] Pending
helpers_test.go:344: "task-pv-pod" [696b9a05-2e4d-427c-9338-0d33cdb1fa46] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [696b9a05-2e4d-427c-9338-0d33cdb1fa46] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.010874679s
addons_test.go:583: (dbg) Run:  kubectl --context addons-906021 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-906021 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-906021 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-906021 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-906021 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-906021 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-906021 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-906021 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [292661bc-7e1b-4d33-9a53-fc10f0b3baa0] Pending
helpers_test.go:344: "task-pv-pod-restore" [292661bc-7e1b-4d33-9a53-fc10f0b3baa0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [292661bc-7e1b-4d33-9a53-fc10f0b3baa0] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.009092716s
addons_test.go:625: (dbg) Run:  kubectl --context addons-906021 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-906021 delete pod task-pv-pod-restore: (1.069544311s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-906021 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-906021 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-906021 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.599939829s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (83.56s)
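
The testdata manifests themselves are not shown in this log, but the snapshot-then-restore shape they exercise is the stock VolumeSnapshot flow: snapshot an existing PVC, then create a new PVC whose dataSource points at the snapshot. A minimal sketch; both class names are assumptions about the csi-hostpath-driver addon defaults:

	kubectl --context addons-906021 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
	  source:
	    persistentVolumeClaimName: hpvc
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                 # assumed class name
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	EOF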

                                                
                                    
x
+
TestAddons/parallel/Headlamp (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-906021 --alsologtostderr -v=1
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-s2zs8" [73ebf29e-1b34-4bd8-8065-25a01f1bcb37] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-s2zs8" [73ebf29e-1b34-4bd8-8065-25a01f1bcb37] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.008562692s
--- PASS: TestAddons/parallel/Headlamp (10.97s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-txpxb" [53481ccc-ae62-41df-ba98-27e85e32615e] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010064587s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-906021
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
TestAddons/parallel/LocalPath (8.54s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-906021 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-906021 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-906021 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a4c21b2d-42c9-4c59-aeb5-6cf96848e830] Pending
helpers_test.go:344: "test-local-path" [a4c21b2d-42c9-4c59-aeb5-6cf96848e830] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a4c21b2d-42c9-4c59-aeb5-6cf96848e830] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a4c21b2d-42c9-4c59-aeb5-6cf96848e830] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.008911484s
addons_test.go:890: (dbg) Run:  kubectl --context addons-906021 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 ssh "cat /opt/local-path-provisioner/pvc-204e6b15-ce12-41b4-aed1-14c06d79cf42_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-906021 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-906021 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-906021 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.54s)
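
A minimal sketch of the round-trip this test automates, using the manifests and profile from the log (the pvc-... directory name under /opt/local-path-provisioner varies per run):

	kubectl --context addons-906021 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-906021 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# wait for the busybox pod to write its file and run to completion
	kubectl --context addons-906021 wait --for=jsonpath='{.status.phase}'=Succeeded pod/test-local-path --timeout=180s
	# the provisioner keeps the volume data on the node's filesystem
	minikube -p addons-906021 ssh -- ls /opt/local-path-provisioner/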

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mfv8h" [e1933ed1-4726-4a86-86e6-0753ce7d0f72] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.020345718s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-906021
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-906021 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-906021 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
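
The check above reduces to two commands: the gcp-auth addon copies its secret into namespaces created after it is enabled, so the get succeeds in a brand-new namespace.

	kubectl --context addons-906021 create ns new-namespace
	# succeeds only if the addon replicated the secret into the new namespace
	kubectl --context addons-906021 get secret gcp-auth -n new-namespace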

                                                
                                    
TestAddons/StoppedEnableDisable (12.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-906021
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-906021: (11.904890038s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-906021
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-906021
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-906021
--- PASS: TestAddons/StoppedEnableDisable (12.19s)

                                                
                                    
TestCertOptions (27.57s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-080699 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-080699 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.785258118s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-080699 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-080699 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-080699 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-080699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-080699
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-080699: (2.073927212s)
--- PASS: TestCertOptions (27.57s)
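
The SANs requested via --apiserver-ips/--apiserver-names can be confirmed by hand with the same openssl call the test runs; a sketch:

	minikube -p cert-options-080699 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	# expect 192.168.15.15, localhost and www.google.com among the entries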

                                                
                                    
TestCertExpiration (225.69s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-585263 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-585263 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.677564956s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-585263 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1206 18:32:17.434527   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-585263 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.658559553s)
helpers_test.go:175: Cleaning up "cert-expiration-585263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-585263
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-585263: (2.35276749s)
--- PASS: TestCertExpiration (225.69s)
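
A sketch of the two-phase flow above. The ~3m pause in the middle is an assumption on my part; the test's 225s wall time suggests it simply waits out the short --cert-expiration window before the second start regenerates the certificates in place.

	minikube start -p cert-expiration-585263 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # let the 3m certificates lapse (assumed wait)
	minikube start -p cert-expiration-585263 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio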

                                                
                                    
TestForceSystemdFlag (39.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-263093 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-263093 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.43461086s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-263093 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-263093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-263093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-263093: (2.384192874s)
--- PASS: TestForceSystemdFlag (39.13s)

                                                
                                    
TestForceSystemdEnv (38.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-059651 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-059651 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.21465913s)
helpers_test.go:175: Cleaning up "force-systemd-env-059651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-059651
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-059651: (2.715565452s)
--- PASS: TestForceSystemdEnv (38.93s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.48s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.48s)

                                                
                                    
TestErrorSpam/setup (20.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-671321 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-671321 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-671321 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-671321 --driver=docker  --container-runtime=crio: (20.431811633s)
--- PASS: TestErrorSpam/setup (20.43s)

                                                
                                    
TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

                                                
                                    
TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 stop: (1.200217445s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-671321 --log_dir /tmp/nospam-671321 stop
--- PASS: TestErrorSpam/stop (1.41s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17711-9529/.minikube/files/etc/test/nested/copy/16346/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (66.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785345 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-785345 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.865288367s)
--- PASS: TestFunctional/serial/StartWithProxy (66.87s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.14s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785345 --alsologtostderr -v=8
E1206 18:07:44.480219   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:44.486075   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:44.496366   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:44.516682   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:44.556986   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:44.637291   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:44.797721   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:45.118280   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:45.758487   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:47.039654   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:49.599855   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:07:54.720772   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:08:04.961607   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-785345 --alsologtostderr -v=8: (35.137958901s)
functional_test.go:659: soft start took 35.138612956s for "functional-785345" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.14s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-785345 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-785345 /tmp/TestFunctionalserialCacheCmdcacheadd_local1375857973/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cache add minikube-local-cache-test:functional-785345
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cache delete minikube-local-cache-test:functional-785345
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-785345
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.930362ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
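
The reload round-trip above, as plain commands: remove the cached image inside the node, confirm it is gone, then let cache reload restore it from the host-side cache.

	minikube -p functional-785345 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-785345 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	minikube -p functional-785345 cache reload
	minikube -p functional-785345 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again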

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 kubectl -- --context functional-785345 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-785345 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (29.47s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 18:08:25.442102   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-785345 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.472781687s)
functional_test.go:757: restart took 29.472964542s for "functional-785345" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-785345 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
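
Roughly what the assertions above boil down to (the jsonpath shape is an illustration, not the test's own code): list the control-plane pods and read each component's phase.

	kubectl --context functional-785345 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'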

                                                
                                    
TestFunctional/serial/LogsCmd (1.38s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 logs: (1.380692182s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 logs --file /tmp/TestFunctionalserialLogsFileCmd783575135/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 logs --file /tmp/TestFunctionalserialLogsFileCmd783575135/001/logs.txt: (1.386995244s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-785345 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-785345
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-785345: exit status 115 (340.379878ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30871 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-785345 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-785345 delete -f testdata/invalidsvc.yaml: (1.036456517s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
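
The failure mode exercised here, reduced to a sketch: a Service whose selector matches no running pod makes minikube service exit with SVC_UNREACHABLE (status 115) instead of printing a usable URL.

	kubectl --context functional-785345 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-785345   # exit status 115
	kubectl --context functional-785345 delete -f testdata/invalidsvc.yaml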

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 config get cpus: exit status 14 (77.112113ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 config get cpus: exit status 14 (71.84245ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
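
The cycle above in isolation: config get on a key that was never set, or has been unset, exits 14 with "specified key could not be found in config", while set/get/unset otherwise round-trip cleanly.

	minikube -p functional-785345 config set cpus 2
	minikube -p functional-785345 config get cpus     # prints 2
	minikube -p functional-785345 config unset cpus
	minikube -p functional-785345 config get cpus     # exit status 14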

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-785345 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-785345 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52285: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-785345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (173.863591ms)

-- stdout --
	* [functional-785345] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 18:09:12.267262   51352 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:09:12.267535   51352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:09:12.267547   51352 out.go:309] Setting ErrFile to fd 2...
	I1206 18:09:12.267555   51352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:09:12.267808   51352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:09:12.268506   51352 out.go:303] Setting JSON to false
	I1206 18:09:12.269733   51352 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3101,"bootTime":1701883051,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:09:12.269802   51352 start.go:138] virtualization: kvm guest
	I1206 18:09:12.272016   51352 out.go:177] * [functional-785345] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:09:12.273762   51352 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:09:12.273793   51352 notify.go:220] Checking for updates...
	I1206 18:09:12.275487   51352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:09:12.276912   51352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:09:12.278460   51352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:09:12.280656   51352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:09:12.282105   51352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:09:12.283898   51352 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:09:12.284419   51352 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:09:12.307546   51352 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:09:12.307689   51352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:09:12.366287   51352 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-06 18:09:12.357316495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:09:12.366382   51352 docker.go:295] overlay module found
	I1206 18:09:12.368409   51352 out.go:177] * Using the docker driver based on existing profile
	I1206 18:09:12.369776   51352 start.go:298] selected driver: docker
	I1206 18:09:12.369799   51352 start.go:902] validating driver "docker" against &{Name:functional-785345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-785345 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:09:12.369911   51352 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:09:12.372206   51352 out.go:177] 
	W1206 18:09:12.373675   51352 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 18:09:12.375042   51352 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785345 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
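
The point of the non-zero exit above: --dry-run still runs the driver and resource validation, so an undersized --memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, per the log) before anything is created. A sketch:

	minikube start -p functional-785345 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23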

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-785345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-785345 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.030176ms)

-- stdout --
	* [functional-785345] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 18:09:12.081111   51255 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:09:12.081318   51255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:09:12.081330   51255 out.go:309] Setting ErrFile to fd 2...
	I1206 18:09:12.081336   51255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:09:12.081629   51255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:09:12.082234   51255 out.go:303] Setting JSON to false
	I1206 18:09:12.083337   51255 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3101,"bootTime":1701883051,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:09:12.083406   51255 start.go:138] virtualization: kvm guest
	I1206 18:09:12.086501   51255 out.go:177] * [functional-785345] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1206 18:09:12.088494   51255 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:09:12.088493   51255 notify.go:220] Checking for updates...
	I1206 18:09:12.090429   51255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:09:12.092165   51255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:09:12.093701   51255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:09:12.095245   51255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:09:12.096680   51255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:09:12.100357   51255 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:09:12.101076   51255 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:09:12.128408   51255 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:09:12.128525   51255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:09:12.194101   51255 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-06 18:09:12.183717796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:09:12.194245   51255 docker.go:295] overlay module found
	I1206 18:09:12.196362   51255 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1206 18:09:12.198317   51255 start.go:298] selected driver: docker
	I1206 18:09:12.198339   51255 start.go:902] validating driver "docker" against &{Name:functional-785345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701685682-17711@sha256:83c739eb138050ac22ab4acdb4b94720ad0623257a780b5e2621b741c3dbbf2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-785345 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1206 18:09:12.198465   51255 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:09:12.200740   51255 out.go:177] 
	W1206 18:09:12.202077   51255 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 18:09:12.203413   51255 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
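
Note: `minikube status -f` takes a Go template over the status struct; the fields exercised above are Host, Kubelet, APIServer and Kubeconfig (the `kublet:` spelling in the test's format string is literal label text, not a field name). Sketch:

# Literal text passes through unchanged; only {{.Field}} expressions are expanded
minikube -p functional-785345 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'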

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (13.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-785345 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-785345 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-4k8b6" [5047bc02-f0f8-4c6b-80a4-b80e2ad46905] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-4k8b6" [5047bc02-f0f8-4c6b-80a4-b80e2ad46905] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.009382782s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31279
functional_test.go:1674: http://192.168.49.2:31279: success! body:

Hostname: hello-node-connect-55497b8b78-4k8b6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31279
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.54s)
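
Note: the flow above (create deployment, expose as NodePort, resolve the URL) can be checked end to end from a shell; a sketch assuming the same profile and service names:

URL=$(minikube -p functional-785345 service hello-node-connect --url)
# The echoserver reflects the request back, so a Hostname line in the response indicates success
curl -s "$URL" | grep '^Hostname:'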

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7cf50ddc-877e-45fb-be01-1649519bfcb5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0206495s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-785345 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-785345 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-785345 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-785345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9fae13bb-87df-4e1b-92c0-1e7028078b20] Pending
helpers_test.go:344: "sp-pod" [9fae13bb-87df-4e1b-92c0-1e7028078b20] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9fae13bb-87df-4e1b-92c0-1e7028078b20] Running
E1206 18:09:06.402433   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.011264257s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-785345 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-785345 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-785345 delete -f testdata/storage-provisioner/pod.yaml: (1.280689533s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-785345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [57667b73-ab9d-48ac-8c38-7f44eb9fe138] Pending
helpers_test.go:344: "sp-pod" [57667b73-ab9d-48ac-8c38-7f44eb9fe138] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009958866s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-785345 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.45s)
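
Note: the PVC and pod manifests live under testdata/storage-provisioner and are not reproduced in this log. A minimal PVC matching the claim name used above (the access mode and size are assumptions, not the actual testdata contents):

kubectl --context functional-785345 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF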

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh -n functional-785345 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 cp functional-785345:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2217031248/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh -n functional-785345 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)
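
Note: the same copy round trip can be verified by hand; a sketch (the temporary path is illustrative):

minikube -p functional-785345 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-785345 cp functional-785345:/home/docker/cp-test.txt /tmp/cp-test.roundtrip.txt
# A clean diff confirms the file survived both directions unchanged
diff testdata/cp-test.txt /tmp/cp-test.roundtrip.txt && echo OK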

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-785345 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pw299" [a9aa03aa-9d49-44d0-94d5-756a60912423] Pending
helpers_test.go:344: "mysql-859648c796-pw299" [a9aa03aa-9d49-44d0-94d5-756a60912423] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pw299" [a9aa03aa-9d49-44d0-94d5-756a60912423] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.013215291s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-785345 exec mysql-859648c796-pw299 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-785345 exec mysql-859648c796-pw299 -- mysql -ppassword -e "show databases;": exit status 1 (125.299596ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-785345 exec mysql-859648c796-pw299 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.51s)
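
Note: the first exec above failed with ERROR 2002 because mysqld had not yet created its socket even though the pod was Running; the harness simply retries until it succeeds. A hedged shell equivalent of that retry (pod name copied from this run):

# Poll until mysqld accepts connections, up to ~60s
for i in $(seq 1 30); do
  kubectl --context functional-785345 exec mysql-859648c796-pw299 -- \
    mysql -ppassword -e 'show databases;' && break
  sleep 2
done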

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16346/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /etc/test/nested/copy/16346/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16346.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /etc/ssl/certs/16346.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16346.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /usr/share/ca-certificates/16346.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/163462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /etc/ssl/certs/163462.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/163462.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /usr/share/ca-certificates/163462.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.75s)
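
Note: the .0 files checked above are OpenSSL subject-hash names for the synced certificates; the hash component can be derived directly (sketch, assuming openssl is available in the guest):

# Prints the subject hash, e.g. 51391683, which names /etc/ssl/certs/51391683.0
minikube -p functional-785345 ssh 'openssl x509 -noout -hash -in /usr/share/ca-certificates/16346.pem'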

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-785345 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
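
Note: the label check ranges over the first node's labels with a Go template; an equivalent jsonpath query for the same data:

kubectl --context functional-785345 get nodes -o jsonpath='{.items[0].metadata.labels}'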

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh "sudo systemctl is-active docker": exit status 1 (279.516778ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh "sudo systemctl is-active containerd": exit status 1 (280.587684ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
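
Note: `systemctl is-active` exits 0 only when the unit is active (exit status 3 means inactive), so the non-zero exits above are the expected result on a CRI-O cluster. Sketch:

# Prints "inactive" followed by "exit=3" on this crio profile
minikube -p functional-785345 ssh 'sudo systemctl is-active docker; echo exit=$?'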

                                                
                                    
x
+
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785345 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-785345
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785345 image ls --format short --alsologtostderr:
I1206 18:09:20.341173   54772 out.go:296] Setting OutFile to fd 1 ...
I1206 18:09:20.341328   54772 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.341338   54772 out.go:309] Setting ErrFile to fd 2...
I1206 18:09:20.341343   54772 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.341780   54772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
I1206 18:09:20.342470   54772 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.342608   54772 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.343106   54772 cli_runner.go:164] Run: docker container inspect functional-785345 --format={{.State.Status}}
I1206 18:09:20.360984   54772 ssh_runner.go:195] Run: systemctl --version
I1206 18:09:20.361066   54772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-785345
I1206 18:09:20.379127   54772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/functional-785345/id_rsa Username:docker}
I1206 18:09:20.464868   54772 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 47568: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785345 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | alpine             | 01e5c69afaf63 | 44.4MB |
| gcr.io/google-containers/addon-resizer  | functional-785345  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785345 image ls --format table --alsologtostderr:
I1206 18:09:20.951167   55080 out.go:296] Setting OutFile to fd 1 ...
I1206 18:09:20.951311   55080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.951324   55080 out.go:309] Setting ErrFile to fd 2...
I1206 18:09:20.951331   55080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.951541   55080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
I1206 18:09:20.952126   55080 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.952233   55080 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.952790   55080 cli_runner.go:164] Run: docker container inspect functional-785345 --format={{.State.Status}}
I1206 18:09:20.972307   55080 ssh_runner.go:195] Run: systemctl --version
I1206 18:09:20.972372   55080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-785345
I1206 18:09:20.989262   55080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/functional-785345/id_rsa Username:docker}
I1206 18:09:21.073323   55080 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785345 image ls --format json --alsologtostderr:
[{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],
"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":["docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc","docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44421929"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfb
be753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-785345"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10
.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"
id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121
f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/
dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785345 image ls --format json --alsologtostderr:
I1206 18:09:20.731923   54999 out.go:296] Setting OutFile to fd 1 ...
I1206 18:09:20.732054   54999 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.732064   54999 out.go:309] Setting ErrFile to fd 2...
I1206 18:09:20.732068   54999 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.732383   54999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
I1206 18:09:20.733240   54999 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.733396   54999 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.733988   54999 cli_runner.go:164] Run: docker container inspect functional-785345 --format={{.State.Status}}
I1206 18:09:20.750541   54999 ssh_runner.go:195] Run: systemctl --version
I1206 18:09:20.750586   54999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-785345
I1206 18:09:20.769826   54999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/functional-785345/id_rsa Username:docker}
I1206 18:09:20.856848   54999 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
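
Note: the JSON form is the machine-readable variant of the table output above; a sketch of extracting just the tags with jq (jq is assumed to be installed on the host):

# repoTags can be empty (the untagged dashboard images above), hence the optional iterator
minikube -p functional-785345 image ls --format json | jq -r '.[].repoTags[]?'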

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785345 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests:
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
- docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc
repoTags:
- docker.io/library/nginx:alpine
size: "44421929"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-785345
size: "34114467"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785345 image ls --format yaml --alsologtostderr:
I1206 18:09:20.499425   54862 out.go:296] Setting OutFile to fd 1 ...
I1206 18:09:20.499566   54862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.499579   54862 out.go:309] Setting ErrFile to fd 2...
I1206 18:09:20.499586   54862 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.499791   54862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
I1206 18:09:20.500528   54862 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.500678   54862 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.501258   54862 cli_runner.go:164] Run: docker container inspect functional-785345 --format={{.State.Status}}
I1206 18:09:20.520503   54862 ssh_runner.go:195] Run: systemctl --version
I1206 18:09:20.520545   54862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-785345
I1206 18:09:20.538544   54862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/functional-785345/id_rsa Username:docker}
I1206 18:09:20.629057   54862 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh pgrep buildkitd: exit status 1 (254.648038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image build -t localhost/my-image:functional-785345 testdata/build --alsologtostderr
2023/12/06 18:09:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 image build -t localhost/my-image:functional-785345 testdata/build --alsologtostderr: (2.478517715s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-785345 image build -t localhost/my-image:functional-785345 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8b17813019e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-785345
--> c61fef10642
Successfully tagged localhost/my-image:functional-785345
c61fef106425dd5dd06f659cde81403f84e418455ec91b0ce326a03427f9128b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-785345 image build -t localhost/my-image:functional-785345 testdata/build --alsologtostderr:
I1206 18:09:20.820263   55031 out.go:296] Setting OutFile to fd 1 ...
I1206 18:09:20.820582   55031 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.820594   55031 out.go:309] Setting ErrFile to fd 2...
I1206 18:09:20.820598   55031 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1206 18:09:20.820842   55031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
I1206 18:09:20.821551   55031 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.822161   55031 config.go:182] Loaded profile config "functional-785345": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1206 18:09:20.822581   55031 cli_runner.go:164] Run: docker container inspect functional-785345 --format={{.State.Status}}
I1206 18:09:20.841845   55031 ssh_runner.go:195] Run: systemctl --version
I1206 18:09:20.841909   55031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-785345
I1206 18:09:20.860419   55031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/functional-785345/id_rsa Username:docker}
I1206 18:09:20.944373   55031 build_images.go:151] Building image from path: /tmp/build.3518947376.tar
I1206 18:09:20.944446   55031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 18:09:20.952428   55031 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3518947376.tar
I1206 18:09:20.955770   55031 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3518947376.tar: stat -c "%s %y" /var/lib/minikube/build/build.3518947376.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3518947376.tar': No such file or directory
I1206 18:09:20.955805   55031 ssh_runner.go:362] scp /tmp/build.3518947376.tar --> /var/lib/minikube/build/build.3518947376.tar (3072 bytes)
I1206 18:09:20.979788   55031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3518947376
I1206 18:09:20.988340   55031 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3518947376 -xf /var/lib/minikube/build/build.3518947376.tar
I1206 18:09:20.996590   55031 crio.go:297] Building image: /var/lib/minikube/build/build.3518947376
I1206 18:09:20.996659   55031 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-785345 /var/lib/minikube/build/build.3518947376 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 18:09:23.213558   55031 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-785345 /var/lib/minikube/build/build.3518947376 --cgroup-manager=cgroupfs: (2.216862517s)
I1206 18:09:23.213623   55031 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3518947376
I1206 18:09:23.222969   55031 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3518947376.tar
I1206 18:09:23.231530   55031 build_images.go:207] Built localhost/my-image:functional-785345 from /tmp/build.3518947376.tar
I1206 18:09:23.231567   55031 build_images.go:123] succeeded building to: functional-785345
I1206 18:09:23.231573   55031 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.13s)
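
Note: the STEP lines in the build log imply a three-instruction Dockerfile in testdata/build; a reconstruction (content.txt's contents are not shown in this log):

cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
minikube -p functional-785345 image build -t localhost/my-image:functional-785345 .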

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.013615903s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-785345
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-785345 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [30c49bbc-93a2-4769-80bf-e1db33237235] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [30c49bbc-93a2-4769-80bf-e1db33237235] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.01403574s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image load --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 image load --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr: (5.016712626s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image load --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 image load --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr: (3.781269601s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-785345 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.112.242 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
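Note: the serial TunnelCmd steps above form one workflow: start the tunnel, deploy a LoadBalancer service, read its ingress IP, hit it, then kill the tunnel. A minimal shell sketch of the same sequence, reusing the commands and names from this log (the curl probe stands in for the AccessDirect check):

    # Run the tunnel in the background so LoadBalancer services get an ingress IP.
    out/minikube-linux-amd64 -p functional-785345 tunnel --alsologtostderr &
    TUNNEL_PID=$!

    # Deploy the test service and read the IP the tunnel assigned to it.
    kubectl --context functional-785345 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-785345 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"        # AccessDirect: the service answers on that IP

    kill "$TUNNEL_PID"          # DeleteTunnel: stop the tunnel process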

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-785345 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-785345 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wp55s" [3d266acf-03cb-4d1f-a93f-051e007fba4a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wp55s" [3d266acf-03cb-4d1f-a93f-051e007fba4a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.013982662s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.19s)
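Note: DeployApp is the setup the later ServiceCmd steps depend on; a sketch of the same deploy-and-expose flow, with the readiness wait expressed via kubectl wait rather than the test's poll loop:

    kubectl --context functional-785345 create deployment hello-node \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-785345 expose deployment hello-node \
      --type=NodePort --port=8080
    # Block until the backing pod is Ready (the test polls for up to 10m).
    kubectl --context functional-785345 wait --for=condition=ready \
      pod --selector=app=hello-node --timeout=10m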

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-785345
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image load --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 image load --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr: (3.91777392s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.05s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "316.519726ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "65.785292ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "280.056719ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "60.189466ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
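Note: the three ProfileCmd subtests time the listing variants shown above; a sketch of the same calls, with jq added purely as an illustration (the JSON field names are assumed from minikube's output schema, not taken from this log):

    out/minikube-linux-amd64 profile list                   # table output
    out/minikube-linux-amd64 profile list -l                # --light: skip cluster status checks
    out/minikube-linux-amd64 profile list -o json           # machine-readable
    out/minikube-linux-amd64 profile list -o json --light
    # e.g. pull out just the profile names (jq assumed installed):
    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'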

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdany-port3730627221/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701886148238866237" to /tmp/TestFunctionalparallelMountCmdany-port3730627221/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701886148238866237" to /tmp/TestFunctionalparallelMountCmdany-port3730627221/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701886148238866237" to /tmp/TestFunctionalparallelMountCmdany-port3730627221/001/test-1701886148238866237
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.615727ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 18:09 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 18:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 18:09 test-1701886148238866237
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh cat /mount-9p/test-1701886148238866237
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-785345 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f93292e5-b2b5-4d54-bf92-e895b07b7f01] Pending
helpers_test.go:344: "busybox-mount" [f93292e5-b2b5-4d54-bf92-e895b07b7f01] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f93292e5-b2b5-4d54-bf92-e895b07b7f01] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f93292e5-b2b5-4d54-bf92-e895b07b7f01] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.015107266s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-785345 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdany-port3730627221/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.15s)
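Note: the any-port mount check is easy to reproduce by hand; a sketch using only commands that appear in this test (/tmp/demo-mount stands in for the test's temp directory):

    # Mount a host directory into the guest over 9p; runs until killed.
    out/minikube-linux-amd64 mount -p functional-785345 /tmp/demo-mount:/mount-9p \
      --alsologtostderr -v=1 &

    # Confirm the 9p mount is visible inside the node, then inspect it.
    out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-785345 ssh -- ls -la /mount-9p

    # Force-unmount from the guest side when finished.
    out/minikube-linux-amd64 -p functional-785345 ssh "sudo umount -f /mount-9p"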

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image save gcr.io/google-containers/addon-resizer:functional-785345 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image rm gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.049764175s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)
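Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile form a round trip; a condensed sketch, with the tarball written to the working directory instead of the Jenkins workspace path:

    IMG=gcr.io/google-containers/addon-resizer:functional-785345
    # Export the image from the cluster runtime, drop it, then restore it from the file.
    out/minikube-linux-amd64 -p functional-785345 image save "$IMG" ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-785345 image rm "$IMG"
    out/minikube-linux-amd64 -p functional-785345 image load ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-785345 image ls   # the image is listed again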

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 service list -o json
functional_test.go:1493: Took "521.998587ms" to run "out/minikube-linux-amd64 -p functional-785345 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30139
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-785345
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 image save --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-785345 image save --daemon gcr.io/google-containers/addon-resizer:functional-785345 --alsologtostderr: (1.114078096s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-785345
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30139
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
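Note: List, JSONOutput, HTTPS, Format and URL all interrogate the same NodePort service; a sketch of the variants, with flags and names taken from this log:

    out/minikube-linux-amd64 -p functional-785345 service list -o json
    out/minikube-linux-amd64 -p functional-785345 service --namespace=default \
      --https --url hello-node                      # e.g. https://192.168.49.2:30139
    out/minikube-linux-amd64 -p functional-785345 service hello-node --url --format={{.IP}}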

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdspecific-port3074613532/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (336.434618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdspecific-port3074613532/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh "sudo umount -f /mount-9p": exit status 1 (308.295376ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-785345 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdspecific-port3074613532/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
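Note: all three UpdateContextCmd variants run the same command; a sketch of its everyday use (the kubectl check is an illustration, not part of the test):

    # Rewrite this profile's kubeconfig entry to the cluster's current endpoint.
    out/minikube-linux-amd64 -p functional-785345 update-context --alsologtostderr -v=2
    # The matching kubectl context carries the profile name:
    kubectl --context functional-785345 cluster-info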

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup818948762/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup818948762/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup818948762/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T" /mount1: exit status 1 (389.146569ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-785345 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-785345 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup818948762/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup818948762/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-785345 /tmp/TestFunctionalparallelMountCmdVerifyCleanup818948762/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)
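Note: VerifyCleanup tears down all mounts with minikube's kill switch instead of unmounting each path; a sketch (/tmp/shared is a hypothetical host directory):

    out/minikube-linux-amd64 mount -p functional-785345 /tmp/shared:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-785345 /tmp/shared:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-785345 /tmp/shared:/mount3 --alsologtostderr -v=1 &
    # One invocation kills every mount process for the profile.
    out/minikube-linux-amd64 mount -p functional-785345 --kill=true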

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-785345
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-785345
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-785345
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (62.42s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-099068 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1206 18:10:28.322751   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-099068 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m2.419242542s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (62.42s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.34s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons enable ingress --alsologtostderr -v=5: (8.336926349s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.34s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)
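Note: the two activation steps only switch the addons on; a sketch that also waits for the controller to come up (the pod selector is assumed from the upstream ingress-nginx manifests, not taken from this log):

    out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons enable ingress --alsologtostderr -v=5
    out/minikube-linux-amd64 -p ingress-addon-legacy-099068 addons enable ingress-dns --alsologtostderr -v=5
    kubectl --context ingress-addon-legacy-099068 wait --namespace ingress-nginx \
      --for=condition=ready pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s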

                                                
                                    
TestJSONOutput/start/Command (66.76s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-068453 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1206 18:14:03.144512   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:14:13.385507   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:14:33.866517   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-068453 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.755644925s)
--- PASS: TestJSONOutput/start/Command (66.76s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-068453 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-068453 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-068453 --output=json --user=testUser
E1206 18:15:14.826748   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-068453 --output=json --user=testUser: (5.742949096s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-686582 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-686582 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.109567ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"93aba1f3-240a-4fdb-8444-8b2443d34a2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-686582] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fe618db-eae8-4eda-9eb2-d720c49a0d0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17711"}}
	{"specversion":"1.0","id":"e70dd6c7-0cdc-4b21-9aac-8d7daf5fd58d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"625e9551-c309-4413-804d-829fad331540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig"}}
	{"specversion":"1.0","id":"5aa2f972-b41e-4ad0-bffd-5eb6e20d28df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube"}}
	{"specversion":"1.0","id":"41eb79bf-28e8-4bc1-b6b6-74d6f5a8b93c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"916345b2-cfb9-4225-b533-d5635fdb83e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"314e2d9e-c476-49c0-9f40-9d6a22d0a4a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-686582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-686582
--- PASS: TestErrorJSONOutput (0.22s)
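Note: with --output=json every line minikube prints is a CloudEvents envelope like those above, so the stream is scriptable; a sketch that filters for error events (jq and the json-demo profile name are assumptions, not part of the test):

    out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # -> The driver 'fail' is not supported on linux/amd64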

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.06s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-678355 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-678355 --network=: (32.052910651s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-678355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-678355
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-678355: (1.994716507s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.06s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.6s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-251285 --network=bridge
E1206 18:15:54.389733   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:54.395012   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:54.405312   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:54.425573   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:54.465867   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:54.546334   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:54.706762   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:55.027341   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:55.667618   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:56.948148   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:15:59.508904   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:16:04.630071   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:16:14.871093   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-251285 --network=bridge: (21.639988771s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-251285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-251285
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-251285: (1.944549884s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.60s)

                                                
                                    
TestKicExistingNetwork (24.24s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-174889 --network=existing-network
E1206 18:16:35.351851   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:16:36.747408   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-174889 --network=existing-network: (22.185348324s)
helpers_test.go:175: Cleaning up "existing-network-174889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-174889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-174889: (1.923839542s)
--- PASS: TestKicExistingNetwork (24.24s)

                                                
                                    
TestKicCustomSubnet (26.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-998355 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-998355 --subnet=192.168.60.0/24: (24.777684511s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-998355 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-998355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-998355
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-998355: (2.017185686s)
--- PASS: TestKicCustomSubnet (26.81s)

                                                
                                    
TestKicStaticIP (27.14s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-305234 --static-ip=192.168.200.200
E1206 18:17:16.312423   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-305234 --static-ip=192.168.200.200: (24.906769159s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-305234 ip
helpers_test.go:175: Cleaning up "static-ip-305234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-305234
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-305234: (2.093907953s)
--- PASS: TestKicStaticIP (27.14s)
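Note: the Kic* tests each vary one networking knob on the same start command; a combined sketch (profile names hypothetical, flag values from the log):

    out/minikube-linux-amd64 start -p kic-a --network=my-net               # named docker network
    out/minikube-linux-amd64 start -p kic-b --network=bridge               # default bridge
    out/minikube-linux-amd64 start -p kic-c --subnet=192.168.60.0/24       # custom subnet
    out/minikube-linux-amd64 start -p kic-d --static-ip=192.168.200.200    # fixed node IP
    # Check what was created on the docker side:
    docker network inspect kic-c --format "{{(index .IPAM.Config 0).Subnet}}"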

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (54.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-697687 --driver=docker  --container-runtime=crio
E1206 18:17:44.480442   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-697687 --driver=docker  --container-runtime=crio: (25.338898442s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-701619 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-701619 --driver=docker  --container-runtime=crio: (23.955655886s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-697687
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-701619
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-701619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-701619
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-701619: (1.871974762s)
helpers_test.go:175: Cleaning up "first-697687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-697687
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-697687: (2.237692667s)
--- PASS: TestMinikubeProfile (54.45s)
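Note: TestMinikubeProfile checks that `minikube profile <name>` switches the active profile between two live clusters; a sketch with the names from this run:

    out/minikube-linux-amd64 profile first-697687      # make first-697687 active
    out/minikube-linux-amd64 profile list -ojson       # its entry should now be marked active
    out/minikube-linux-amd64 profile second-701619     # switch to the second cluster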

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-303672 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-303672 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.061619375s)
E1206 18:18:38.233075   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (8.06s)
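Note: StartWithMountFirst folds the host mount into `start` itself instead of running a separate mount process; the flags below are exactly those from the log, and the ssh check mirrors VerifyMountFirst:

    out/minikube-linux-amd64 start -p mount-start-1-303672 --memory=2048 \
      --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-303672 ssh -- ls /minikube-host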

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-303672 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-323219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-323219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.227963528s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.23s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-323219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-303672 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-303672 --alsologtostderr -v=5: (1.625542638s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-323219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-323219
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-323219: (1.201683162s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-323219
E1206 18:18:52.904203   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-323219: (5.872018822s)
--- PASS: TestMountStart/serial/RestartStopped (6.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-323219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (87.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193731 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1206 18:19:20.588110   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193731 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m26.750560665s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (87.20s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-193731 -- rollout status deployment/busybox: (2.396055128s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-5kkfq -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-k9dh8 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-5kkfq -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-k9dh8 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-5kkfq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-193731 -- exec busybox-5bc68d56bd-k9dh8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.09s)
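
The deploy step rolls out a busybox deployment across the nodes and then checks DNS from every pod, covering an external name, the short in-cluster name, and the fully qualified service name. A hedged sketch of the same loop, using the pod names from this run:

    # verify DNS resolution from each busybox pod
    for pod in busybox-5bc68d56bd-5kkfq busybox-5bc68d56bd-k9dh8; do
      kubectl --context multinode-193731 exec "$pod" -- nslookup kubernetes.io
      kubectl --context multinode-193731 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done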

                                                
                                    
TestMultiNode/serial/AddNode (45.93s)
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-193731 -v 3 --alsologtostderr
E1206 18:20:54.389461   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-193731 -v 3 --alsologtostderr: (45.335181544s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.93s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-193731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.28s)
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.21s)
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp testdata/cp-test.txt multinode-193731:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2069695983/001/cp-test_multinode-193731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731:/home/docker/cp-test.txt multinode-193731-m02:/home/docker/cp-test_multinode-193731_multinode-193731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731 "sudo cat /home/docker/cp-test.txt"
E1206 18:21:22.074004   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test_multinode-193731_multinode-193731-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731:/home/docker/cp-test.txt multinode-193731-m03:/home/docker/cp-test_multinode-193731_multinode-193731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m03 "sudo cat /home/docker/cp-test_multinode-193731_multinode-193731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp testdata/cp-test.txt multinode-193731-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2069695983/001/cp-test_multinode-193731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731-m02:/home/docker/cp-test.txt multinode-193731:/home/docker/cp-test_multinode-193731-m02_multinode-193731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731 "sudo cat /home/docker/cp-test_multinode-193731-m02_multinode-193731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731-m02:/home/docker/cp-test.txt multinode-193731-m03:/home/docker/cp-test_multinode-193731-m02_multinode-193731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m03 "sudo cat /home/docker/cp-test_multinode-193731-m02_multinode-193731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp testdata/cp-test.txt multinode-193731-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2069695983/001/cp-test_multinode-193731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731-m03:/home/docker/cp-test.txt multinode-193731:/home/docker/cp-test_multinode-193731-m03_multinode-193731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731 "sudo cat /home/docker/cp-test_multinode-193731-m03_multinode-193731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 cp multinode-193731-m03:/home/docker/cp-test.txt multinode-193731-m02:/home/docker/cp-test_multinode-193731-m03_multinode-193731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test_multinode-193731-m03_multinode-193731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.21s)
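
The copy matrix above repeats one three-step pattern for every source/destination pair: cp a file onto a node, cat it back over ssh to confirm the write, then cp it across to the other nodes and confirm again. One cell of that matrix as a standalone sketch:

    # copy a local file to the m02 node, then read it back to verify the contents
    minikube -p multinode-193731 cp testdata/cp-test.txt multinode-193731-m02:/home/docker/cp-test.txt
    minikube -p multinode-193731 ssh -n multinode-193731-m02 "sudo cat /home/docker/cp-test.txt"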

                                                
                                    
TestMultiNode/serial/StopNode (2.13s)
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-193731 node stop m03: (1.210961899s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193731 status: exit status 7 (457.75624ms)

                                                
                                                
-- stdout --
	multinode-193731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-193731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-193731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr: exit status 7 (457.41033ms)

                                                
                                                
-- stdout --
	multinode-193731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-193731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-193731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 18:21:30.842910  115595 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:21:30.843153  115595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:21:30.843161  115595 out.go:309] Setting ErrFile to fd 2...
	I1206 18:21:30.843166  115595 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:21:30.843356  115595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:21:30.843533  115595 out.go:303] Setting JSON to false
	I1206 18:21:30.843565  115595 mustload.go:65] Loading cluster: multinode-193731
	I1206 18:21:30.843669  115595 notify.go:220] Checking for updates...
	I1206 18:21:30.843930  115595 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:21:30.843943  115595 status.go:255] checking status of multinode-193731 ...
	I1206 18:21:30.844332  115595 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:21:30.860669  115595 status.go:330] multinode-193731 host status = "Running" (err=<nil>)
	I1206 18:21:30.860692  115595 host.go:66] Checking if "multinode-193731" exists ...
	I1206 18:21:30.860946  115595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731
	I1206 18:21:30.877925  115595 host.go:66] Checking if "multinode-193731" exists ...
	I1206 18:21:30.878173  115595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:21:30.878222  115595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731
	I1206 18:21:30.893820  115595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731/id_rsa Username:docker}
	I1206 18:21:30.981098  115595 ssh_runner.go:195] Run: systemctl --version
	I1206 18:21:30.984869  115595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:21:30.994726  115595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:21:31.044162  115595 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-06 18:21:31.036079067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:21:31.044712  115595 kubeconfig.go:92] found "multinode-193731" server: "https://192.168.58.2:8443"
	I1206 18:21:31.044737  115595 api_server.go:166] Checking apiserver status ...
	I1206 18:21:31.044768  115595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 18:21:31.054758  115595 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I1206 18:21:31.063334  115595 api_server.go:182] apiserver freezer: "4:freezer:/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio/crio-3080b8188463da5a9a4e3a8e0bc2b5a608dad9b0c569055363f7367594f6798b"
	I1206 18:21:31.063394  115595 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e4beb39a8487084792d37f763875640422f1379fcdbeb484b1c036bdbb8c4efa/crio/crio-3080b8188463da5a9a4e3a8e0bc2b5a608dad9b0c569055363f7367594f6798b/freezer.state
	I1206 18:21:31.070892  115595 api_server.go:204] freezer state: "THAWED"
	I1206 18:21:31.070919  115595 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1206 18:21:31.074978  115595 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1206 18:21:31.075006  115595 status.go:421] multinode-193731 apiserver status = Running (err=<nil>)
	I1206 18:21:31.075015  115595 status.go:257] multinode-193731 status: &{Name:multinode-193731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 18:21:31.075029  115595 status.go:255] checking status of multinode-193731-m02 ...
	I1206 18:21:31.075284  115595 cli_runner.go:164] Run: docker container inspect multinode-193731-m02 --format={{.State.Status}}
	I1206 18:21:31.092057  115595 status.go:330] multinode-193731-m02 host status = "Running" (err=<nil>)
	I1206 18:21:31.092082  115595 host.go:66] Checking if "multinode-193731-m02" exists ...
	I1206 18:21:31.092378  115595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-193731-m02
	I1206 18:21:31.108843  115595 host.go:66] Checking if "multinode-193731-m02" exists ...
	I1206 18:21:31.109494  115595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 18:21:31.109547  115595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-193731-m02
	I1206 18:21:31.128561  115595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17711-9529/.minikube/machines/multinode-193731-m02/id_rsa Username:docker}
	I1206 18:21:31.213044  115595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 18:21:31.223387  115595 status.go:257] multinode-193731-m02 status: &{Name:multinode-193731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 18:21:31.223434  115595 status.go:255] checking status of multinode-193731-m03 ...
	I1206 18:21:31.223728  115595 cli_runner.go:164] Run: docker container inspect multinode-193731-m03 --format={{.State.Status}}
	I1206 18:21:31.239821  115595 status.go:330] multinode-193731-m03 host status = "Stopped" (err=<nil>)
	I1206 18:21:31.239846  115595 status.go:343] host is not running, skipping remaining checks
	I1206 18:21:31.239854  115595 status.go:257] multinode-193731-m03 status: &{Name:multinode-193731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
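
Note the exit-code contract being tested: once any node in the profile is stopped, minikube status exits non-zero (7 in this run) even though the command itself ran fine, so callers can detect a degraded cluster without parsing the table. A sketch of relying on that in a script:

    minikube -p multinode-193731 status
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "cluster degraded: status exited $rc"   # 7 here means a host is stopped
    fi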

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.7s)
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-193731 node start m03 --alsologtostderr: (10.036664252s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.70s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (116.58s)
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-193731
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-193731
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-193731: (24.830244709s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193731 --wait=true -v=8 --alsologtostderr
E1206 18:22:44.480133   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193731 --wait=true -v=8 --alsologtostderr: (1m31.627256286s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-193731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.58s)
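
The restart check is a before/after comparison of the node list: stop the whole profile, start it again with --wait=true, and confirm no node was dropped. A condensed sketch (note that node IPs could in principle change across a restart, so a strict diff is a conservative check):

    minikube node list -p multinode-193731 > before.txt
    minikube stop -p multinode-193731
    minikube start -p multinode-193731 --wait=true
    minikube node list -p multinode-193731 > after.txt
    diff before.txt after.txt   # an empty diff means every node came back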

                                                
                                    
TestMultiNode/serial/DeleteNode (4.67s)
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-193731 node delete m03: (4.097994924s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 stop
E1206 18:23:52.905130   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-193731 stop: (23.634619615s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193731 status: exit status 7 (96.403829ms)

                                                
                                                
-- stdout --
	multinode-193731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-193731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr: exit status 7 (92.15407ms)

                                                
                                                
-- stdout --
	multinode-193731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-193731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 18:24:06.979937  125900 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:24:06.980079  125900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:24:06.980088  125900 out.go:309] Setting ErrFile to fd 2...
	I1206 18:24:06.980092  125900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:24:06.980328  125900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:24:06.980505  125900 out.go:303] Setting JSON to false
	I1206 18:24:06.980539  125900 mustload.go:65] Loading cluster: multinode-193731
	I1206 18:24:06.980588  125900 notify.go:220] Checking for updates...
	I1206 18:24:06.980953  125900 config.go:182] Loaded profile config "multinode-193731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:24:06.980965  125900 status.go:255] checking status of multinode-193731 ...
	I1206 18:24:06.981374  125900 cli_runner.go:164] Run: docker container inspect multinode-193731 --format={{.State.Status}}
	I1206 18:24:06.999745  125900 status.go:330] multinode-193731 host status = "Stopped" (err=<nil>)
	I1206 18:24:06.999769  125900 status.go:343] host is not running, skipping remaining checks
	I1206 18:24:06.999775  125900 status.go:257] multinode-193731 status: &{Name:multinode-193731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 18:24:06.999800  125900 status.go:255] checking status of multinode-193731-m02 ...
	I1206 18:24:07.000036  125900 cli_runner.go:164] Run: docker container inspect multinode-193731-m02 --format={{.State.Status}}
	I1206 18:24:07.015923  125900 status.go:330] multinode-193731-m02 host status = "Stopped" (err=<nil>)
	I1206 18:24:07.015950  125900 status.go:343] host is not running, skipping remaining checks
	I1206 18:24:07.015957  125900 status.go:257] multinode-193731-m02 status: &{Name:multinode-193731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (73.45s)
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193731 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1206 18:24:07.523697   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193731 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.854632894s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-193731 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.45s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.44s)
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-193731
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193731-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-193731-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.979246ms)

                                                
                                                
-- stdout --
	* [multinode-193731-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-193731-m02' is duplicated with machine name 'multinode-193731-m02' in profile 'multinode-193731'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-193731-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-193731-m03 --driver=docker  --container-runtime=crio: (24.138559567s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-193731
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-193731: exit status 80 (283.874818ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-193731
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-193731-m03 already exists in multinode-193731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-193731-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-193731-m03: (1.876227796s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.44s)
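
Two naming rules are exercised above: a new profile may not reuse a machine name that an existing multi-node profile already owns (exit 14, MK_USAGE), and node add refuses when the generated node name collides with a standalone profile (exit 80, GUEST_NODE_ADD). A sketch of triggering the first rule, assuming multinode-193731 is still running:

    # exits 14: the multinode-193731 profile already owns a machine
    # named multinode-193731-m02
    minikube start -p multinode-193731-m02 --driver=docker --container-runtime=crio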

                                                
                                    
TestScheduledStopUnix (97.48s)
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-773846 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-773846 --memory=2048 --driver=docker  --container-runtime=crio: (21.796095424s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773846 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-773846 -n scheduled-stop-773846
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773846 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773846 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-773846 -n scheduled-stop-773846
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-773846
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-773846 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1206 18:27:44.479484   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-773846
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-773846: exit status 7 (77.296217ms)

                                                
                                                
-- stdout --
	scheduled-stop-773846
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-773846 -n scheduled-stop-773846
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-773846 -n scheduled-stop-773846: exit status 7 (76.193496ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-773846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-773846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-773846: (4.251454379s)
--- PASS: TestScheduledStopUnix (97.48s)
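
The scheduled-stop flow above reduces to three operations: arm a delayed stop, optionally cancel it, and poll status until the host reports Stopped (exit status 7). Condensed into a sketch:

    minikube stop -p scheduled-stop-773846 --schedule 5m          # arm a stop 5 minutes out
    minikube stop -p scheduled-stop-773846 --cancel-scheduled     # disarm it
    minikube stop -p scheduled-stop-773846 --schedule 15s         # re-arm with a short fuse
    sleep 30
    minikube status -p scheduled-stop-773846                      # exits 7 once the host is stopped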

                                                
                                    
TestInsufficientStorage (10.28s)
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-543617 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-543617 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.898857761s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"404fa5c9-42f1-4274-87eb-89cbfabd72a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-543617] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"df255b4c-501c-40a2-8da1-5990fd0a076f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17711"}}
	{"specversion":"1.0","id":"bb49f3c4-28ab-4c03-aa40-a5f1217bbecb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0bb9f4fc-d5a5-4b82-8a5d-a74c49791a77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig"}}
	{"specversion":"1.0","id":"74bb1194-ef15-4a28-9682-198de0318f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube"}}
	{"specversion":"1.0","id":"9cc1c1cc-eccb-44b9-8f72-ebc465cd9fd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"613eeba9-4bdc-4987-b76e-578e686076f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0502d06-64a4-422d-b23b-32d68ce54888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7a68dd76-8891-42ff-bb63-f8d515336c79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3747a868-fc0d-4a3c-9dd9-a7b8e419436c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"646723b9-9b45-4a68-b862-d84aac4ac406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"be964fb7-2c1b-4b9b-8647-ab110fa6626d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-543617 in cluster insufficient-storage-543617","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c3e4ce6-10eb-4ad4-8625-d16003c0096c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbc46997-d365-47ac-a07e-79aa966d236b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"97fb607c-d5e3-4051-b21f-ce55ebddea2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-543617 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-543617 --output=json --layout=cluster: exit status 7 (269.086139ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-543617","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-543617","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 18:28:07.609166  142808 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-543617" does not appear in /home/jenkins/minikube-integration/17711-9529/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-543617 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-543617 --output=json --layout=cluster: exit status 7 (274.531915ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-543617","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-543617","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 18:28:07.884401  142893 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-543617" does not appear in /home/jenkins/minikube-integration/17711-9529/kubeconfig
	E1206 18:28:07.894020  142893 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/insufficient-storage-543617/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-543617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-543617
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-543617: (1.839898663s)
--- PASS: TestInsufficientStorage (10.28s)
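
With --output=json every progress line is a CloudEvents envelope, which is what makes this failure machine-checkable: the test only needs to find an event of type io.k8s.sigs.minikube.error carrying name RSRC_DOCKER_STORAGE and exitcode 26, rather than scraping human-readable text. A sketch of pulling such events out of the stream:

    # surface only error events from the JSON progress stream
    minikube start -p insufficient-storage-543617 --output=json --driver=docker \
      | grep '"type":"io.k8s.sigs.minikube.error"'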

                                                
                                    
TestKubernetesUpgrade (354.07s)
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 18:30:15.948765   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.416094908s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-125688
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-125688: (4.805179087s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-125688 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-125688 status --format={{.Host}}: exit status 7 (95.239548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.51833237s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-125688 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (89.863466ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-125688] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-125688
	    minikube start -p kubernetes-upgrade-125688 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1256882 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-125688 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-125688 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.640802903s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-125688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-125688
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-125688: (2.444053581s)
--- PASS: TestKubernetesUpgrade (354.07s)
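
The upgrade exercise above only moves in the supported direction: start on an old release, stop, restart on the newer one, and expect a hard refusal (exit 106, K8S_DOWNGRADE_UNSUPPORTED) on the way back down. The whole flow as a sketch:

    minikube start -p kubernetes-upgrade-125688 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-125688
    minikube start -p kubernetes-upgrade-125688 --kubernetes-version=v1.29.0-rc.1 --driver=docker --container-runtime=crio
    # downgrading the same profile is rejected with exit status 106:
    minikube start -p kubernetes-upgrade-125688 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio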

                                                
                                    
TestMissingContainerUpgrade (142.67s)
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.3879866004.exe start -p missing-upgrade-872237 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.3879866004.exe start -p missing-upgrade-872237 --memory=2200 --driver=docker  --container-runtime=crio: (1m18.770557473s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-872237
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-872237: (2.857332035s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-872237
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-872237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-872237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.701374077s)
helpers_test.go:175: Cleaning up "missing-upgrade-872237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-872237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-872237: (5.91598149s)
--- PASS: TestMissingContainerUpgrade (142.67s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045981 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-045981 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (117.62537ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-045981] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
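
As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global minikube config trips the same check. Following the suggestion printed above:

    # clear any globally configured version so --no-kubernetes can proceed
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-045981 --no-kubernetes --driver=docker --container-runtime=crio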

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.9s)
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045981 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045981 --driver=docker  --container-runtime=crio: (35.42205176s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-045981 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.90s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.6s)
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045981 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045981 --no-kubernetes --driver=docker  --container-runtime=crio: (3.842593214s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-045981 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-045981 status -o json: exit status 2 (328.231984ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-045981","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-045981
E1206 18:28:52.903015   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-045981: (6.430154221s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.60s)

                                                
                                    
TestNoKubernetes/serial/Start (5.07s)
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045981 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045981 --no-kubernetes --driver=docker  --container-runtime=crio: (5.06971654s)
--- PASS: TestNoKubernetes/serial/Start (5.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-045981 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-045981 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.495402ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
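
The verification leans on systemd exit codes: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active (the remote status 3 above is the usual code for an inactive unit), and minikube ssh propagates the failure. A plain shell test is therefore enough:

    # succeeds only if kubelet is NOT active inside the guest
    if ! minikube ssh -p NoKubernetes-045981 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is not running, as expected"
    fi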

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)
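Note: both listings come from the same command; the JSON form is the machine-readable one. A sketch, assuming jq and assuming minikube's profile-list JSON keeps its top-level `valid`/`invalid` arrays:
	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'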

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-045981
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-045981: (1.233326983s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (6.58s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-045981 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-045981 --driver=docker  --container-runtime=crio: (6.580233493s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.58s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-045981 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-045981 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.106762ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestNetworkPlugins/group/false (4.05s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-291578 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-291578 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (170.000748ms)

-- stdout --
	* [false-291578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1206 18:29:17.145572  165477 out.go:296] Setting OutFile to fd 1 ...
	I1206 18:29:17.145857  165477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:29:17.145865  165477 out.go:309] Setting ErrFile to fd 2...
	I1206 18:29:17.145870  165477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1206 18:29:17.146066  165477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17711-9529/.minikube/bin
	I1206 18:29:17.146686  165477 out.go:303] Setting JSON to false
	I1206 18:29:17.148020  165477 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4306,"bootTime":1701883051,"procs":501,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 18:29:17.148112  165477 start.go:138] virtualization: kvm guest
	I1206 18:29:17.150786  165477 out.go:177] * [false-291578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1206 18:29:17.152418  165477 out.go:177]   - MINIKUBE_LOCATION=17711
	I1206 18:29:17.153959  165477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 18:29:17.152352  165477 notify.go:220] Checking for updates...
	I1206 18:29:17.156763  165477 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17711-9529/kubeconfig
	I1206 18:29:17.158266  165477 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17711-9529/.minikube
	I1206 18:29:17.159792  165477 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 18:29:17.161307  165477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 18:29:17.163354  165477 config.go:182] Loaded profile config "cert-expiration-585263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:29:17.163488  165477 config.go:182] Loaded profile config "offline-crio-996697": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1206 18:29:17.163586  165477 driver.go:392] Setting default libvirt URI to qemu:///system
	I1206 18:29:17.186211  165477 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1206 18:29:17.186362  165477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 18:29:17.243436  165477 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-06 18:29:17.23293528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 18:29:17.243545  165477 docker.go:295] overlay module found
	I1206 18:29:17.245712  165477 out.go:177] * Using the docker driver based on user configuration
	I1206 18:29:17.247270  165477 start.go:298] selected driver: docker
	I1206 18:29:17.247287  165477 start.go:902] validating driver "docker" against <nil>
	I1206 18:29:17.247298  165477 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 18:29:17.249922  165477 out.go:177] 
	W1206 18:29:17.251498  165477 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 18:29:17.253074  165477 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-291578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-291578

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-291578

>>> host: /etc/nsswitch.conf:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/hosts:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/resolv.conf:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-291578

>>> host: crictl pods:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: crictl containers:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> k8s: describe netcat deployment:
error: context "false-291578" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-291578" does not exist

>>> k8s: netcat logs:
error: context "false-291578" does not exist

>>> k8s: describe coredns deployment:
error: context "false-291578" does not exist

>>> k8s: describe coredns pods:
error: context "false-291578" does not exist

>>> k8s: coredns logs:
error: context "false-291578" does not exist

>>> k8s: describe api server pod(s):
error: context "false-291578" does not exist

>>> k8s: api server logs:
error: context "false-291578" does not exist

>>> host: /etc/cni:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: ip a s:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: ip r s:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: iptables-save:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: iptables table nat:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> k8s: describe kube-proxy daemon set:
error: context "false-291578" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-291578" does not exist

>>> k8s: kube-proxy logs:
error: context "false-291578" does not exist

>>> host: kubelet daemon status:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: kubelet daemon config:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> k8s: kubelet logs:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-585263
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:28:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-996697
contexts:
- context:
    cluster: cert-expiration-585263
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-585263
  name: cert-expiration-585263
- context:
    cluster: offline-crio-996697
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:28:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: offline-crio-996697
  name: offline-crio-996697
current-context: cert-expiration-585263
kind: Config
preferences: {}
users:
- name: cert-expiration-585263
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-expiration-585263/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-expiration-585263/client.key
- name: offline-crio-996697
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/offline-crio-996697/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/offline-crio-996697/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-291578

>>> host: docker daemon status:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: docker daemon config:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/docker/daemon.json:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: docker system info:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: cri-docker daemon status:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: cri-docker daemon config:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: cri-dockerd version:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: containerd daemon status:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: containerd daemon config:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/containerd/config.toml:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: containerd config dump:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: crio daemon status:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: crio daemon config:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: /etc/crio:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

>>> host: crio config:
* Profile "false-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-291578"

----------------------- debugLogs end: false-291578 [took: 3.709516252s] --------------------------------
helpers_test.go:175: Cleaning up "false-291578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-291578
--- PASS: TestNetworkPlugins/group/false (4.05s)
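Note: the MK_USAGE exit above is the point of this test: CRI-O ships no built-in networking, so minikube rejects `--cni=false` with that runtime. A start line the validator does accept, as a sketch (any concrete CNI would do):
	out/minikube-linux-amd64 start -p false-291578 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio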

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.54s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-444504
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.54s)

TestPause/serial/Start (47.1s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-735914 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-735914 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.099776549s)
--- PASS: TestPause/serial/Start (47.10s)

TestNetworkPlugins/group/auto/Start (71.89s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m11.891971672s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.89s)

TestNetworkPlugins/group/kindnet/Start (66.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m6.374450067s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.37s)

TestPause/serial/SecondStartNoReconfiguration (30.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-735914 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 18:32:44.480500   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-735914 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.043174661s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.05s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-735914 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-735914 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-735914 --output=json --layout=cluster: exit status 2 (358.29057ms)

-- stdout --
	{"Name":"pause-735914","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-735914","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
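Note: the `--layout=cluster` JSON uses HTTP-like status codes, all visible above: 200 (OK), 405 (Stopped), 418 (Paused). Pulling out the overall state, as a sketch assuming jq:
	out/minikube-linux-amd64 status -p pause-735914 --output=json --layout=cluster | jq -r '.StatusName'
	# prints "Paused"; the command also exits 2 here, mirroring the paused state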

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-735914 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-735914 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.69s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-735914 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-735914 --alsologtostderr -v=5: (2.687736512s)
--- PASS: TestPause/serial/DeletePaused (2.69s)
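Note: taken together, the pause tests above walk the full lifecycle; the commands, verbatim from the runs:
	out/minikube-linux-amd64 pause -p pause-735914 --alsologtostderr -v=5     # freeze the control plane
	out/minikube-linux-amd64 unpause -p pause-735914 --alsologtostderr -v=5   # resume it
	out/minikube-linux-amd64 delete -p pause-735914 --alsologtostderr -v=5    # a paused profile deletes cleanly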

TestPause/serial/VerifyDeletedResources (0.62s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-735914
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-735914: exit status 1 (18.021121ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-735914: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)
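Note: the volume check works because `docker volume inspect` exits 1 and prints an empty JSON array for a missing volume, exactly as the stderr above shows. The same assertion as a one-line sketch:
	docker volume inspect pause-735914 >/dev/null 2>&1 && echo "volume still present" || echo "volume gone, as expected"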

TestNetworkPlugins/group/calico/Start (64.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.660757471s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkn7h" [3d61f8aa-171a-463d-96f2-ef8ac9eb673b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkn7h" [3d61f8aa-171a-463d-96f2-ef8ac9eb673b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.010647585s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
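Note: the DNS/Localhost/HairPin trio maps to three probes inside the netcat pod, verbatim from the runs above:
	kubectl --context auto-291578 exec deployment/netcat -- nslookup kubernetes.default       # in-cluster DNS
	kubectl --context auto-291578 exec deployment/netcat -- nc -w 5 -i 5 -z localhost 8080    # pod-local port
	kubectl --context auto-291578 exec deployment/netcat -- nc -w 5 -i 5 -z netcat 8080       # hairpin via its own service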

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w4w98" [09034b8c-a33f-4d91-99ef-fb70502bdfae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01936889s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
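Note: the helper's label-selector wait is roughly the following kubectl invocation (a sketch, not the suite's own code):
	kubectl --context kindnet-291578 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=600s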

TestNetworkPlugins/group/custom-flannel/Start (55.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.619854304s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.62s)
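Note: as this run shows, `--cni` also accepts a path to a custom manifest, not only the built-in names (kindnet, calico, flannel, bridge) used elsewhere in this group:
	out/minikube-linux-amd64 start -p custom-flannel-291578 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio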

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6x4fz" [adadb462-1bb5-43ac-9f00-f45b08757b30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 18:33:52.903301   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6x4fz" [adadb462-1bb5-43ac-9f00-f45b08757b30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.008932949s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.36s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hccnt" [c25c7fd3-8325-4eb0-86e2-114dc165ddbb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.022201635s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/enable-default-cni/Start (39.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.49371839s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.49s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nhznf" [51d0af15-fc1c-4a71-afa1-c88951ad5d79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nhznf" [51d0af15-fc1c-4a71-afa1-c88951ad5d79] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.009996437s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.37s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-46xdm" [46b6d88f-060c-4a29-b45e-969962ff74bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-46xdm" [46b6d88f-060c-4a29-b45e-969962ff74bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.009092106s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (62.16s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.161480904s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xll7d" [b62f9c25-d907-425f-8735-59c877af5956] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xll7d" [b62f9c25-d907-425f-8735-59c877af5956] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.011454409s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (41.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-291578 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.537766075s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (108.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-843989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-843989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m48.391254204s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (108.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (81.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-538632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1206 18:35:54.389408   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-538632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (1m21.578851329s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.58s)
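--preload=false makes minikube pull every component image individually instead of unpacking a prebuilt image tarball, which is why this FirstStart tends to run longer than the embed-certs one below. A sketch of the equivalent manual invocation (profile name hypothetical):

  # No preloaded tarball: all v1.29.0-rc.1 images are pulled at start time.
  minikube start -p no-preload-demo --memory=2200 --preload=false \
    --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.1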

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gxqvh" [41c9a766-66cd-4ce2-bc4b-8de9b42448fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gxqvh" [41c9a766-66cd-4ce2-bc4b-8de9b42448fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.01113045s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hhj7s" [7096a87e-9f1d-4755-8c3c-e2115fe9affd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020156829s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-291578 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-291578 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-j4lvd" [5454a5dd-da98-41f4-9099-caf0de2fb075] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-j4lvd" [5454a5dd-da98-41f4-9099-caf0de2fb075] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.010608046s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-291578 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-291578 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
E1206 18:43:39.767816   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:43:40.736896   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:43:42.003511   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:43:42.546084   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-352438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-352438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (45.101542269s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-821365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-821365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m8.492273309s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-538632 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb2202e1-a0a9-4217-a3ab-da2b5174ac17] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bb2202e1-a0a9-4217-a3ab-da2b5174ac17] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.016181309s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-538632 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.86s)
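DeployApp schedules a plain busybox pod and then execs a trivial command through it, exercising both image pulls and the exec path on the freshly started cluster. By hand (a sketch; testdata/busybox.yaml ships with the minikube test suite):

  kubectl --context no-preload-538632 create -f testdata/busybox.yaml
  kubectl --context no-preload-538632 wait --for=condition=Ready \
    pod -l integration-test=busybox --timeout=8m
  # Any command works; the suite happens to read the open-file limit.
  kubectl --context no-preload-538632 exec busybox -- /bin/sh -c "ulimit -n"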

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-538632 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-538632 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)
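EnableAddonWhileActive enables metrics-server on the running cluster while redirecting its image to the placeholder registry fake.domain, so the test exercises the addon wiring rather than a real image pull. Equivalent manual steps:

  # --images/--registries override where the addon's MetricsServer image comes from.
  minikube addons enable metrics-server -p no-preload-538632 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain
  kubectl --context no-preload-538632 describe deploy/metrics-server -n kube-system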

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-538632 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-538632 --alsologtostderr -v=3: (12.028173594s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-352438 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [73009798-d976-44d3-ab13-451ece209b6d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [73009798-d976-44d3-ab13-451ece209b6d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.015619476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-352438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-538632 -n no-preload-538632
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-538632 -n no-preload-538632: exit status 7 (79.46987ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-538632 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
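minikube status exits non-zero when the node is down, which is why the harness notes "exit status 7 (may be ok)" above. A script wanting the same tolerance can branch on the exit code explicitly (a sketch; the code-to-state mapping is minikube's and not guaranteed stable across versions):

  minikube status --format='{{.Host}}' -p no-preload-538632 -n no-preload-538632
  rc=$?
  # 0 means running; 7 is what a cleanly stopped host reports in this suite.
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
    echo "unexpected status exit code: $rc" >&2
    exit 1
  fi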

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (335.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-538632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-538632 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (5m35.286499694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-538632 -n no-preload-538632
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-843989 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c01e7b22-cdb5-4003-8e7e-a81b60a64b08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c01e7b22-cdb5-4003-8e7e-a81b60a64b08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014312158s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-843989 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-352438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-352438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-352438 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-352438 --alsologtostderr -v=3: (11.963927565s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-843989 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-843989 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-843989 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-843989 --alsologtostderr -v=3: (12.097868594s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-352438 -n embed-certs-352438
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-352438 -n embed-certs-352438: exit status 7 (78.575948ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-352438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (335.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-352438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-352438 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m35.221950989s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-352438 -n embed-certs-352438
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843989 -n old-k8s-version-843989
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843989 -n old-k8s-version-843989: exit status 7 (109.851371ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-843989 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (433.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-843989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1206 18:37:44.480104   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-843989 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m13.608571942s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843989 -n old-k8s-version-843989
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (433.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-821365 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61ef31a2-4204-4aee-a50c-c534634923f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61ef31a2-4204-4aee-a50c-c534634923f5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.015711727s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-821365 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-821365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-821365 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-821365 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-821365 --alsologtostderr -v=3: (12.179113982s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365: exit status 7 (92.329469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-821365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-821365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1206 18:38:14.862417   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:14.867701   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:14.877985   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:14.898269   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:14.938552   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:15.018834   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:15.179251   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:15.499993   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:16.141162   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:17.422284   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:19.982616   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:25.102913   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:35.343595   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:38:40.736365   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:40.741641   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:40.752640   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:40.772860   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:40.813182   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:40.893569   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:41.053975   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:41.374555   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:42.015269   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:43.296457   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:45.856986   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:50.977342   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:38:52.903585   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
E1206 18:38:55.824399   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:39:01.218349   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:39:15.886564   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:15.891831   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:15.902069   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:15.922343   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:15.962831   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:16.043161   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:16.203593   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:16.524143   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:17.164868   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:18.445413   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:21.005978   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:21.698778   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:39:26.126551   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:36.367481   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:36.784867   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:39:40.719962   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:40.725278   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:40.736127   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:40.756367   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:40.796652   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:40.877427   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:41.037826   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:41.358096   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:41.998760   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:43.279237   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:45.840134   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:50.960835   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:39:56.847619   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:39:59.672377   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:39:59.677625   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:39:59.687895   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:39:59.708157   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:39:59.748449   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:39:59.828732   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:39:59.989341   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:00.309987   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:00.950552   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:01.201951   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:40:02.231643   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:02.659159   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:40:04.792032   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:09.913130   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:20.154010   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:21.682586   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:40:37.808444   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:40:40.634349   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:40:47.524334   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
E1206 18:40:54.389638   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/ingress-addon-legacy-099068/client.crt: no such file or directory
E1206 18:40:55.924623   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:55.929923   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:55.940248   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:55.960563   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:56.000879   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:56.081198   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:56.241443   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:56.562031   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:57.202191   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:58.159873   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.165142   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.175451   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.195742   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.236031   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.316353   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.476770   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:58.482949   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:40:58.705244   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
E1206 18:40:58.797420   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:40:59.438384   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:41:00.718796   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:41:01.043343   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:41:02.642849   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:41:03.279549   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:41:06.164234   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:41:08.400625   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:41:16.405197   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:41:18.641491   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:41:21.594520   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:41:24.580095   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
E1206 18:41:36.885648   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:41:39.122187   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:41:59.729300   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/calico-291578/client.crt: no such file or directory
E1206 18:42:17.846859   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/bridge-291578/client.crt: no such file or directory
E1206 18:42:20.082835   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/flannel-291578/client.crt: no such file or directory
E1206 18:42:24.563558   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/custom-flannel-291578/client.crt: no such file or directory
E1206 18:42:43.514892   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
E1206 18:42:44.480240   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/addons-906021/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-821365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m41.429653138s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vr7dd" [386a752d-34e0-4ca1-91d2-50115c04ab0e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vr7dd" [386a752d-34e0-4ca1-91d2-50115c04ab0e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.017458662s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vr7dd" [386a752d-34e0-4ca1-91d2-50115c04ab0e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009484178s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-538632 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-538632 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
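Note: VerifyKubernetesImages diffs the profile's image list against the expected minikube set and reports anything extra, such as the kindnet and busybox images above. The listing can be inspected by hand; a sketch, assuming jq is installed (the repoTags field name reflects minikube's JSON output and should be verified against your version):

	# Dump the images pulled for this profile and print their tags
	out/minikube-linux-amd64 -p no-preload-538632 image list --format=json \
	  | jq -r '.[].repoTags[]'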

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-538632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-538632 -n no-preload-538632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-538632 -n no-preload-538632: exit status 2 (410.534316ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-538632 -n no-preload-538632
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-538632 -n no-preload-538632: exit status 2 (406.418763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-538632 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-538632 -n no-preload-538632
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-538632 -n no-preload-538632
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.48s)
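Note: the sequence above encodes minikube's status contract: while a profile is paused, the {{.APIServer}} template renders Paused and {{.Kubelet}} renders Stopped, and the status command itself exits 2, which the test treats as acceptable. Condensed into a standalone sketch (profile name taken from this run):

	out/minikube-linux-amd64 pause -p no-preload-538632
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-538632 || true  # prints Paused, exit 2
	out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p no-preload-538632 || true  # prints Stopped, exit 2
	out/minikube-linux-amd64 unpause -p no-preload-538632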

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jpknd" [9ea99ce8-e8e8-4281-b4a9-7f67321a7a81] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jpknd" [9ea99ce8-e8e8-4281-b4a9-7f67321a7a81] Running
E1206 18:43:14.862457   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/auto-291578/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.021005521s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (35.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-808335 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-808335 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (35.216820944s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.22s)
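Note: the CNI-specific pieces of this start invocation are easy to miss on one line. The same command as in the log above, wrapped for readability (no flags changed): --network-plugin=cni selects the CNI path, --extra-config=kubeadm.pod-network-cidr overrides the pod CIDR, and the Kubernetes version is pinned to a release candidate.

	out/minikube-linux-amd64 start -p newest-cni-808335 \
	  --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.29.0-rc.1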

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jpknd" [9ea99ce8-e8e8-4281-b4a9-7f67321a7a81] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011099973s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-352438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-352438 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-352438 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-352438 -n embed-certs-352438
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-352438 -n embed-certs-352438: exit status 2 (398.115414ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-352438 -n embed-certs-352438
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-352438 -n embed-certs-352438: exit status 2 (419.392784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-352438 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-352438 -n embed-certs-352438
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-352438 -n embed-certs-352438
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zkzs7" [1f8422df-b599-4866-ae94-e6d185c179f8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zkzs7" [1f8422df-b599-4866-ae94-e6d185c179f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.018343047s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-808335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)
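Note: the enable step rewrites the addon's named images, which is how the suite points metrics-server at a stub. The general pattern, with <profile> as a placeholder (fake.domain is the test's stand-in registry and will not resolve outside CI):

	# --images=<Name>=<repo:tag> swaps a named addon image;
	# --registries=<Name>=<host> redirects that image to another registry
	out/minikube-linux-amd64 addons enable metrics-server -p <profile> \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain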

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-808335 --alsologtostderr -v=3
E1206 18:43:52.903137   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/functional-785345/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-808335 --alsologtostderr -v=3: (3.090806101s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-808335 -n newest-cni-808335
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-808335 -n newest-cni-808335: exit status 7 (78.177768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-808335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
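Note: the exit codes matter here: status exits 7 when the host is stopped (and 2 when components are merely paused, as in the Pause tests), so scripted checks must capture rather than abort on a non-zero exit. A sketch of the pattern:

	# Read the host state without tripping set -e on the expected non-zero exit
	state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-808335 -n newest-cni-808335) || rc=$?
	echo "host=${state} exit=${rc:-0}"  # here: host=Stopped exit=7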

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-808335 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-808335 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (24.808044951s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-808335 -n newest-cni-808335
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zkzs7" [1f8422df-b599-4866-ae94-e6d185c179f8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0086592s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-821365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-821365 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-821365 --alsologtostderr -v=1
E1206 18:44:08.420282   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/kindnet-291578/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365: exit status 2 (297.747476ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365: exit status 2 (293.732444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-821365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-821365 -n default-k8s-diff-port-821365
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-808335 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-808335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-808335 -n newest-cni-808335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-808335 -n newest-cni-808335: exit status 2 (296.384861ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-808335 -n newest-cni-808335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-808335 -n newest-cni-808335: exit status 2 (298.11758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-808335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-808335 -n newest-cni-808335
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-808335 -n newest-cni-808335
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.43s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7fhwf" [ebafd6cf-71ae-4a7b-92fd-978e2611c38d] Running
E1206 18:44:59.671734   16346 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/enable-default-cni-291578/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015254762s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7fhwf" [ebafd6cf-71ae-4a7b-92fd-978e2611c38d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008636562s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-843989 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-843989 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-843989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843989 -n old-k8s-version-843989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843989 -n old-k8s-version-843989: exit status 2 (287.52706ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-843989 -n old-k8s-version-843989
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-843989 -n old-k8s-version-843989: exit status 2 (285.452076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-843989 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843989 -n old-k8s-version-843989
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-843989 -n old-k8s-version-843989
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

                                                
                                    

Test skip (27/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-291578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-291578

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-291578" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-585263
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8555
  name: cert-options-080699
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:28:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-996697
contexts:
- context:
    cluster: cert-expiration-585263
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-585263
  name: cert-expiration-585263
- context:
    cluster: cert-options-080699
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-options-080699
  name: cert-options-080699
- context:
    cluster: offline-crio-996697
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:28:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: offline-crio-996697
  name: offline-crio-996697
current-context: cert-expiration-585263
kind: Config
preferences: {}
users:
- name: cert-expiration-585263
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-expiration-585263/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-expiration-585263/client.key
- name: cert-options-080699
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-options-080699/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-options-080699/client.key
- name: offline-crio-996697
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/offline-crio-996697/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/offline-crio-996697/client.key
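Note: the dump explains every kubenet-291578 failure above: the merged kubeconfig only contains the three profiles listed, so any context lookup for kubenet-291578 cannot resolve. A quick confirmation against the same kubeconfig:

	kubectl config get-contexts                         # kubenet-291578 is absent
	kubectl config use-context cert-expiration-585263   # the current-context in the dump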

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-291578

>>> host: docker daemon status:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: docker daemon config:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: docker system info:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: cri-docker daemon status:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: cri-docker daemon config:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: cri-dockerd version:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: containerd daemon status:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: containerd daemon config:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: containerd config dump:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: crio daemon status:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: crio daemon config:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: /etc/crio:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

>>> host: crio config:
* Profile "kubenet-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-291578"

----------------------- debugLogs end: kubenet-291578 [took: 3.865773358s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-291578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-291578
--- SKIP: TestNetworkPlugins/group/kubenet (4.04s)

TestNetworkPlugins/group/cilium (4.19s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-291578 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-291578

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-291578

>>> host: /etc/nsswitch.conf:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/hosts:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/resolv.conf:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-291578

>>> host: crictl pods:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: crictl containers:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> k8s: describe netcat deployment:
error: context "cilium-291578" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-291578" does not exist

>>> k8s: netcat logs:
error: context "cilium-291578" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-291578" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-291578" does not exist

>>> k8s: coredns logs:
error: context "cilium-291578" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-291578" does not exist

>>> k8s: api server logs:
error: context "cilium-291578" does not exist

>>> host: /etc/cni:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: ip a s:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: ip r s:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: iptables-save:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: iptables table nat:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-291578

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-291578

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-291578" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-291578" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-291578

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-291578

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-291578" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-291578" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-291578" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-291578" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-291578" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: kubelet daemon config:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> k8s: kubelet logs:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-585263
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17711-9529/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:28:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-996697
contexts:
- context:
    cluster: cert-expiration-585263
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:29:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-585263
  name: cert-expiration-585263
- context:
    cluster: offline-crio-996697
    extensions:
    - extension:
        last-update: Wed, 06 Dec 2023 18:28:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: offline-crio-996697
  name: offline-crio-996697
current-context: cert-expiration-585263
kind: Config
preferences: {}
users:
- name: cert-expiration-585263
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-expiration-585263/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/cert-expiration-585263/client.key
- name: offline-crio-996697
  user:
    client-certificate: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/offline-crio-996697/client.crt
    client-key: /home/jenkins/minikube-integration/17711-9529/.minikube/profiles/offline-crio-996697/client.key
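
Note: as with the kubenet profile above, "cilium-291578" never makes it into this kubeconfig, which is why the remaining kubectl probes below report the same context-not-found errors.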

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-291578

>>> host: docker daemon status:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: docker daemon config:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: docker system info:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: cri-docker daemon status:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: cri-docker daemon config:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: cri-dockerd version:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: containerd daemon status:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: containerd daemon config:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: containerd config dump:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: crio daemon status:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: crio daemon config:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: /etc/crio:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

>>> host: crio config:
* Profile "cilium-291578" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-291578"

----------------------- debugLogs end: cilium-291578 [took: 4.00917522s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-291578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-291578
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-621760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-621760
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
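
Note: this group is gated to the virtualbox driver. On a host with VirtualBox installed, the scenario it covers (disabling the filesystem mounts provided by the hypervisor) could be exercised manually along these lines; this is an illustrative invocation, not part of the recorded run, using the documented --disable-driver-mounts flag of minikube start:

  $ out/minikube-linux-amd64 start -p disable-driver-mounts-621760 --driver=virtualbox --disable-driver-mounts
  $ out/minikube-linux-amd64 delete -p disable-driver-mounts-621760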