Test Report: Docker_Linux_crio 17738

8768890baa5a64021183265111cefbb8aeebcf2d:2023-12-08:32200

Test failures (6/315)

Order  Failed Test                                           Duration (s)
35     TestAddons/parallel/Ingress                           154.52
36     TestAddons/parallel/InspektorGadget                   7.89
166    TestIngressAddonLegacy/serial/ValidateIngressAddons   177.63
216    TestMultiNode/serial/PingHostFrom2Pods                3.17
238    TestRunningBinaryUpgrade                              93.6
253    TestStoppedBinaryUpgrade/Upgrade                      96.88
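
To reproduce one of these failures outside CI, the integration tests can be re-run individually from a minikube source checkout. A minimal sketch, assuming the standard test/integration harness (flag names and timeouts may differ by minikube version):

  # re-run a single failed test against the same driver/runtime combination as this job
  go test -v -timeout 30m ./test/integration -run "TestAddons/parallel/Ingress" \
    -args -minikube-start-args="--driver=docker --container-runtime=crio"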
TestAddons/parallel/Ingress (154.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-766826 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-766826 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-766826 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2f43e8f3-b864-47fb-9ed0-22d1c06b4980] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2f43e8f3-b864-47fb-9ed0-22d1c06b4980] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010019373s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-766826 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.343519346s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
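The ssh wrapper propagates the remote command's exit code, and curl's exit status 28 means the transfer timed out, i.e. nothing completed an HTTP response on port 80 inside the node within the timeout. A hedged manual re-check, reusing this run's profile name:

  # repeat the probe with verbose output and an explicit timeout
  out/minikube-linux-amd64 -p addons-766826 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # confirm the ingress controller pods and service are actually up
  kubectl --context addons-766826 -n ingress-nginx get pods,svc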
addons_test.go:285: (dbg) Run:  kubectl --context addons-766826 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-766826 addons disable ingress-dns --alsologtostderr -v=1: (1.652638788s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-766826 addons disable ingress --alsologtostderr -v=1: (7.622784108s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-766826
helpers_test.go:235: (dbg) docker inspect addons-766826:

-- stdout --
	[
	    {
	        "Id": "543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788",
	        "Created": "2023-12-08T18:10:56.666593963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-08T18:10:56.948123685Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7e83e141d5f1084600bb5c7d15c9e2fd69083458051c2cf9d21dfd6243a0ff9b",
	        "ResolvConfPath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/hostname",
	        "HostsPath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/hosts",
	        "LogPath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788-json.log",
	        "Name": "/addons-766826",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-766826:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-766826",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b-init/diff:/var/lib/docker/overlay2/f01fd4b86350391aeb4ddce306a73284c32c8168179c226f9bf8857f27cbe54b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-766826",
	                "Source": "/var/lib/docker/volumes/addons-766826/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-766826",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-766826",
	                "name.minikube.sigs.k8s.io": "addons-766826",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "822a4e1dc2929e050de2cb01d72854eda554c5cebb70a24475a0143ca1d46572",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/822a4e1dc292",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-766826": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "543daae92b3e",
	                        "addons-766826"
	                    ],
	                    "NetworkID": "e81a6b26a78ebb03e2e0e03e51afee0a8a4d0b13ed68dae384bb8b39b45b41b6",
	                    "EndpointID": "8c5edae405e0e69ef25f05f02022bf3ddbd04dc0eedd2f0098b9037dc7d3e67a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
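Most of the JSON above is default HostConfig; the fields that matter for triage (container state, cluster-network IP, forwarded SSH port) can be pulled directly with Go templates, the same style the harness itself uses later in this log. A sketch:

  # container state, cluster-network IP, and the host port mapped to 22/tcp
  docker container inspect addons-766826 --format '{{.State.Status}}'
  docker container inspect addons-766826 --format '{{(index .NetworkSettings.Networks "addons-766826").IPAddress}}'
  docker container inspect addons-766826 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'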
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-766826 -n addons-766826
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-766826 logs -n 25: (1.202280738s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-892064                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| delete  | -p download-only-892064                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| start   | --download-only -p                                                                          | download-docker-819225 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | download-docker-819225                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-819225                                                                   | download-docker-819225 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-908328   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | binary-mirror-908328                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44187                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-908328                                                                     | binary-mirror-908328   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| addons  | enable dashboard -p                                                                         | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-766826 --wait=true                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-766826 ssh cat                                                                       | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | /opt/local-path-provisioner/pvc-de77890f-3fa6-42c6-805e-20b83a22f899_default_test-pvc/file1 |                        |         |         |                     |                     |
	| ip      | addons-766826 ip                                                                            | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-766826 addons                                                                        | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC |                     |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | -p addons-766826                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-766826 ssh curl -s                                                                   | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-766826 addons                                                                        | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-766826 addons                                                                        | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | -p addons-766826                                                                            |                        |         |         |                     |                     |
	| ip      | addons-766826 ip                                                                            | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:15 UTC | 08 Dec 23 18:15 UTC |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:15 UTC | 08 Dec 23 18:15 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:15 UTC | 08 Dec 23 18:15 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:10:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:10:35.398019  344702 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:10:35.398146  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:35.398153  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:10:35.398158  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:35.398328  344702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:10:35.398918  344702 out.go:303] Setting JSON to false
	I1208 18:10:35.399763  344702 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6735,"bootTime":1702052300,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:10:35.399822  344702 start.go:138] virtualization: kvm guest
	I1208 18:10:35.401933  344702 out.go:177] * [addons-766826] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:10:35.403254  344702 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:10:35.403316  344702 notify.go:220] Checking for updates...
	I1208 18:10:35.404495  344702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:10:35.405732  344702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:10:35.406964  344702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:10:35.408228  344702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:10:35.409433  344702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:10:35.410772  344702 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:10:35.430876  344702 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:10:35.431013  344702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:35.479450  344702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:35.471411869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:35.479541  344702 docker.go:295] overlay module found
	I1208 18:10:35.481483  344702 out.go:177] * Using the docker driver based on user configuration
	I1208 18:10:35.482846  344702 start.go:298] selected driver: docker
	I1208 18:10:35.482866  344702 start.go:902] validating driver "docker" against <nil>
	I1208 18:10:35.482876  344702 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:10:35.483681  344702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:35.531533  344702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:35.523779373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:35.531734  344702 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 18:10:35.531937  344702 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 18:10:35.533803  344702 out.go:177] * Using Docker driver with root privileges
	I1208 18:10:35.535160  344702 cni.go:84] Creating CNI manager for ""
	I1208 18:10:35.535182  344702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:35.535195  344702 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 18:10:35.535218  344702 start_flags.go:323] config:
	{Name:addons-766826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:35.536634  344702 out.go:177] * Starting control plane node addons-766826 in cluster addons-766826
	I1208 18:10:35.537797  344702 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:10:35.539036  344702 out.go:177] * Pulling base image ...
	I1208 18:10:35.540348  344702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:35.540405  344702 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:35.540416  344702 cache.go:56] Caching tarball of preloaded images
	I1208 18:10:35.540438  344702 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:10:35.540495  344702 preload.go:174] Found /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 18:10:35.540505  344702 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1208 18:10:35.540924  344702 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/config.json ...
	I1208 18:10:35.540950  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/config.json: {Name:mk1b44e8663c9d9f9ecd1a043dd0e150fd90a0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:10:35.554684  344702 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 to local cache
	I1208 18:10:35.554808  344702 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory
	I1208 18:10:35.554823  344702 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory, skipping pull
	I1208 18:10:35.554828  344702 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in cache, skipping pull
	I1208 18:10:35.554838  344702 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 as a tarball
	I1208 18:10:35.554843  344702 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 from local cache
	I1208 18:10:48.227022  344702 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 from cached tarball
	I1208 18:10:48.227072  344702 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:10:48.227171  344702 start.go:365] acquiring machines lock for addons-766826: {Name:mkd33173a289aa7ad362ea3ee90ba26cfce28fce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:10:48.227293  344702 start.go:369] acquired machines lock for "addons-766826" in 94.671µs
	I1208 18:10:48.227322  344702 start.go:93] Provisioning new machine with config: &{Name:addons-766826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:10:48.227417  344702 start.go:125] createHost starting for "" (driver="docker")
	I1208 18:10:48.298822  344702 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1208 18:10:48.299136  344702 start.go:159] libmachine.API.Create for "addons-766826" (driver="docker")
	I1208 18:10:48.299170  344702 client.go:168] LocalClient.Create starting
	I1208 18:10:48.299324  344702 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem
	I1208 18:10:48.601721  344702 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem
	I1208 18:10:48.729267  344702 cli_runner.go:164] Run: docker network inspect addons-766826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 18:10:48.745586  344702 cli_runner.go:211] docker network inspect addons-766826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 18:10:48.745662  344702 network_create.go:281] running [docker network inspect addons-766826] to gather additional debugging logs...
	I1208 18:10:48.745683  344702 cli_runner.go:164] Run: docker network inspect addons-766826
	W1208 18:10:48.762060  344702 cli_runner.go:211] docker network inspect addons-766826 returned with exit code 1
	I1208 18:10:48.762092  344702 network_create.go:284] error running [docker network inspect addons-766826]: docker network inspect addons-766826: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-766826 not found
	I1208 18:10:48.762112  344702 network_create.go:286] output of [docker network inspect addons-766826]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-766826 not found
	
	** /stderr **
	I1208 18:10:48.762230  344702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:10:48.778846  344702 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002d4c200}
	I1208 18:10:48.778903  344702 network_create.go:124] attempt to create docker network addons-766826 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 18:10:48.778973  344702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-766826 addons-766826
	I1208 18:10:49.044212  344702 network_create.go:108] docker network addons-766826 192.168.49.0/24 created
	I1208 18:10:49.044255  344702 kic.go:121] calculated static IP "192.168.49.2" for the "addons-766826" container
	I1208 18:10:49.044330  344702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 18:10:49.058979  344702 cli_runner.go:164] Run: docker volume create addons-766826 --label name.minikube.sigs.k8s.io=addons-766826 --label created_by.minikube.sigs.k8s.io=true
	I1208 18:10:49.163699  344702 oci.go:103] Successfully created a docker volume addons-766826
	I1208 18:10:49.163837  344702 cli_runner.go:164] Run: docker run --rm --name addons-766826-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-766826 --entrypoint /usr/bin/test -v addons-766826:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib
	I1208 18:10:51.365190  344702 cli_runner.go:217] Completed: docker run --rm --name addons-766826-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-766826 --entrypoint /usr/bin/test -v addons-766826:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib: (2.201295404s)
	I1208 18:10:51.365224  344702 oci.go:107] Successfully prepared a docker volume addons-766826
	I1208 18:10:51.365264  344702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:51.365291  344702 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 18:10:51.365349  344702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-766826:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 18:10:56.604017  344702 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-766826:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.238609551s)
	I1208 18:10:56.604048  344702 kic.go:203] duration metric: took 5.238755 seconds to extract preloaded images to volume
	W1208 18:10:56.604182  344702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 18:10:56.604276  344702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 18:10:56.652550  344702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-766826 --name addons-766826 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-766826 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-766826 --network addons-766826 --ip 192.168.49.2 --volume addons-766826:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:10:56.956167  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Running}}
	I1208 18:10:56.972820  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:10:56.990438  344702 cli_runner.go:164] Run: docker exec addons-766826 stat /var/lib/dpkg/alternatives/iptables
	I1208 18:10:57.050090  344702 oci.go:144] the created container "addons-766826" has a running status.
	I1208 18:10:57.050131  344702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa...
	I1208 18:10:57.373393  344702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 18:10:57.392698  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:10:57.408611  344702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 18:10:57.408636  344702 kic_runner.go:114] Args: [docker exec --privileged addons-766826 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 18:10:57.496117  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:10:57.513251  344702 machine.go:88] provisioning docker machine ...
	I1208 18:10:57.513328  344702 ubuntu.go:169] provisioning hostname "addons-766826"
	I1208 18:10:57.513449  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:57.534604  344702 main.go:141] libmachine: Using SSH client type: native
	I1208 18:10:57.535208  344702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1208 18:10:57.535238  344702 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-766826 && echo "addons-766826" | sudo tee /etc/hostname
	I1208 18:10:57.669551  344702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-766826
	
	I1208 18:10:57.669639  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:57.688461  344702 main.go:141] libmachine: Using SSH client type: native
	I1208 18:10:57.688818  344702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1208 18:10:57.688869  344702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-766826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-766826/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-766826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 18:10:57.810491  344702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:10:57.810520  344702 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:10:57.810557  344702 ubuntu.go:177] setting up certificates
	I1208 18:10:57.810573  344702 provision.go:83] configureAuth start
	I1208 18:10:57.810630  344702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-766826
	I1208 18:10:57.827338  344702 provision.go:138] copyHostCerts
	I1208 18:10:57.827414  344702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:10:57.827534  344702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:10:57.827607  344702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:10:57.827664  344702 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.addons-766826 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-766826]
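minikube generates this server certificate internally in Go; an approximately equivalent openssl sketch, with the org and SAN list taken from the log line above (file names here are illustrative), would be:

	# Sketch only: sign a server cert for the SANs listed above with the minikube CA
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.addons-766826" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-766826')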
	I1208 18:10:58.037009  344702 provision.go:172] copyRemoteCerts
	I1208 18:10:58.037084  344702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:10:58.037151  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.053414  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.142608  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:10:58.163569  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1208 18:10:58.184504  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 18:10:58.205867  344702 provision.go:86] duration metric: configureAuth took 395.27283ms
	I1208 18:10:58.205901  344702 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:10:58.206085  344702 config.go:182] Loaded profile config "addons-766826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:10:58.206207  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.222680  344702 main.go:141] libmachine: Using SSH client type: native
	I1208 18:10:58.223008  344702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1208 18:10:58.223024  344702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:10:58.431598  344702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:10:58.431631  344702 machine.go:91] provisioned docker machine in 918.353023ms
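How /etc/sysconfig/crio.minikube is consumed is not shown in this log; the usual pattern (an assumption about the kicbase image, not confirmed here) is a systemd drop-in that sources the file into crio's command line:

	# Hypothetical drop-in (assumption): how the env file could reach crio's ExecStart
	sudo mkdir -p /etc/systemd/system/crio.service.d
	cat <<-'EOF' | sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio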
	I1208 18:10:58.431643  344702 client.go:171] LocalClient.Create took 10.132465458s
	I1208 18:10:58.431666  344702 start.go:167] duration metric: libmachine.API.Create for "addons-766826" took 10.132532785s
	I1208 18:10:58.431709  344702 start.go:300] post-start starting for "addons-766826" (driver="docker")
	I1208 18:10:58.431725  344702 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:10:58.431808  344702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:10:58.431862  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.448069  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.539521  344702 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:10:58.542607  344702 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:10:58.542652  344702 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:10:58.542672  344702 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:10:58.542686  344702 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1208 18:10:58.542703  344702 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:10:58.542782  344702 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:10:58.542816  344702 start.go:303] post-start completed in 111.096153ms
	I1208 18:10:58.543275  344702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-766826
	I1208 18:10:58.560849  344702 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/config.json ...
	I1208 18:10:58.561143  344702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:10:58.561192  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.577486  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.663444  344702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:10:58.667582  344702 start.go:128] duration metric: createHost completed in 10.44014647s
	I1208 18:10:58.667610  344702 start.go:83] releasing machines lock for "addons-766826", held for 10.440304486s
	I1208 18:10:58.667684  344702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-766826
	I1208 18:10:58.683757  344702 ssh_runner.go:195] Run: cat /version.json
	I1208 18:10:58.683808  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.683849  344702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:10:58.683916  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.699728  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.701078  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.785963  344702 ssh_runner.go:195] Run: systemctl --version
	I1208 18:10:58.790076  344702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:10:58.927543  344702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:10:58.931811  344702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:10:58.949255  344702 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:10:58.949329  344702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:10:58.975521  344702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
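The two find/mv runs above park conflicting loopback and bridge CNI configs under a .mk_disabled suffix so the runtime falls through to kindnet. Undoing that by hand (a sketch, not a minikube command) is just the reverse rename:

	# Restore any CNI configs that minikube renamed out of the way
	for f in /etc/cni/net.d/*.mk_disabled; do
	  sudo mv "$f" "${f%.mk_disabled}"
	done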
	I1208 18:10:58.975549  344702 start.go:475] detecting cgroup driver to use...
	I1208 18:10:58.975580  344702 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:10:58.975617  344702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:10:58.989892  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:10:58.999944  344702 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:10:58.999993  344702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:10:59.012312  344702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:10:59.024673  344702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 18:10:59.105246  344702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:10:59.179052  344702 docker.go:219] disabling docker service ...
	I1208 18:10:59.179107  344702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:10:59.196586  344702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:10:59.206693  344702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:10:59.279463  344702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:10:59.355864  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 18:10:59.365812  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:10:59.379389  344702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1208 18:10:59.379439  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.387854  344702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 18:10:59.387924  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.396805  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.405024  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.413445  344702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 18:10:59.421205  344702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 18:10:59.428490  344702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 18:10:59.435628  344702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 18:10:59.507579  344702 ssh_runner.go:195] Run: sudo systemctl restart crio
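After the sed edits above, the effective settings can be confirmed straight from the drop-in before the restart is trusted (a verification sketch):

	# The three values the sed commands above are expected to leave in place
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"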
	I1208 18:10:59.596865  344702 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 18:10:59.596963  344702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 18:10:59.600306  344702 start.go:543] Will wait 60s for crictl version
	I1208 18:10:59.600353  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:10:59.603421  344702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 18:10:59.635598  344702 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1208 18:10:59.635700  344702 ssh_runner.go:195] Run: crio --version
	I1208 18:10:59.668608  344702 ssh_runner.go:195] Run: crio --version
	I1208 18:10:59.704008  344702 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1208 18:10:59.705641  344702 cli_runner.go:164] Run: docker network inspect addons-766826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:10:59.721774  344702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 18:10:59.725227  344702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 18:10:59.735439  344702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:59.735498  344702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:10:59.791378  344702 crio.go:496] all images are preloaded for cri-o runtime.
	I1208 18:10:59.791402  344702 crio.go:415] Images already preloaded, skipping extraction
	I1208 18:10:59.791449  344702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:10:59.822941  344702 crio.go:496] all images are preloaded for cri-o runtime.
	I1208 18:10:59.822965  344702 cache_images.go:84] Images are preloaded, skipping loading
	I1208 18:10:59.823026  344702 ssh_runner.go:195] Run: crio config
	I1208 18:10:59.863332  344702 cni.go:84] Creating CNI manager for ""
	I1208 18:10:59.863354  344702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:59.863380  344702 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1208 18:10:59.863401  344702 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-766826 NodeName:addons-766826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 18:10:59.863518  344702 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-766826"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
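A generated config like the one above can be vetted before the real run with kubeadm's dry-run mode (a sketch; the test itself goes straight to kubeadm init below):

	# Validate the generated config without touching the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run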
	
	I1208 18:10:59.863574  344702 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-766826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1208 18:10:59.863621  344702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1208 18:10:59.871681  344702 binaries.go:44] Found k8s binaries, skipping transfer
	I1208 18:10:59.871743  344702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 18:10:59.879250  344702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1208 18:10:59.894835  344702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 18:10:59.910586  344702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1208 18:10:59.926440  344702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 18:10:59.929507  344702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 18:10:59.938941  344702 certs.go:56] Setting up /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826 for IP: 192.168.49.2
	I1208 18:10:59.938985  344702 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5abf3d3db90d2494e2d623a52fec5b2843f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:10:59.939117  344702 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key
	I1208 18:11:00.347543  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt ...
	I1208 18:11:00.347573  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt: {Name:mkebb9c5ec660f8fb0fbef25138a9307f3148dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.347743  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key ...
	I1208 18:11:00.347753  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key: {Name:mk73d5996c1cb7cf921d1e1a76c3fe7bb86b939e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.347818  344702 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key
	I1208 18:11:00.665798  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt ...
	I1208 18:11:00.665834  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt: {Name:mk0b4c28708e258b8bcb9b9d5175dc48cfb0f674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.666004  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key ...
	I1208 18:11:00.666015  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key: {Name:mk3cb7c4892d2ce7791c43b3da5dddfa48505634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.666115  344702 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.key
	I1208 18:11:00.666128  344702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt with IP's: []
	I1208 18:11:00.990301  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt ...
	I1208 18:11:00.990339  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: {Name:mkacdf54e0bb0d02b559b4a566313eb2d9b0bf5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.990555  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.key ...
	I1208 18:11:00.990573  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.key: {Name:mk56890f5b5a4234858be8e78aeac0be5f06b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.990653  344702 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2
	I1208 18:11:00.990668  344702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1208 18:11:01.119977  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2 ...
	I1208 18:11:01.120008  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2: {Name:mk0176b352b16a5010d95b2c8e2593ced4cb0475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.120161  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2 ...
	I1208 18:11:01.120179  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2: {Name:mkbf46e45710c65630c3d9932836e6cd5d5904d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.120245  344702 certs.go:337] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt
	I1208 18:11:01.120308  344702 certs.go:341] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key
	I1208 18:11:01.120349  344702 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key
	I1208 18:11:01.120365  344702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt with IP's: []
	I1208 18:11:01.318728  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt ...
	I1208 18:11:01.318770  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt: {Name:mke99fb8b56ae3f85a7ddbddf047a306784da1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.318979  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key ...
	I1208 18:11:01.318998  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key: {Name:mk6a9744bbcfaa7ae2890dd4bb3528ea3cafdae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.319214  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 18:11:01.319260  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem (1082 bytes)
	I1208 18:11:01.319290  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem (1123 bytes)
	I1208 18:11:01.319317  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem (1679 bytes)
	I1208 18:11:01.320069  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1208 18:11:01.342475  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 18:11:01.363813  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 18:11:01.384419  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 18:11:01.405349  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 18:11:01.426173  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 18:11:01.446820  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 18:11:01.467014  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 18:11:01.487776  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 18:11:01.508254  344702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 18:11:01.523286  344702 ssh_runner.go:195] Run: openssl version
	I1208 18:11:01.528147  344702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1208 18:11:01.536267  344702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:11:01.539262  344702 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:11:01.539309  344702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:11:01.545254  344702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
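The b5213941.0 symlink above follows the OpenSSL subject-hash convention: the hash printed by the openssl x509 -hash run two lines up is what names the link. Reproduced by hand (a sketch):

	# Link the CA into the hashed layout OpenSSL uses for CA lookups
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h == b5213941 here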
	I1208 18:11:01.552995  344702 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1208 18:11:01.556162  344702 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:11:01.556249  344702 kubeadm.go:404] StartCluster: {Name:addons-766826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:11:01.556338  344702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 18:11:01.556383  344702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 18:11:01.589317  344702 cri.go:89] found id: ""
	I1208 18:11:01.589385  344702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 18:11:01.597892  344702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 18:11:01.606145  344702 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1208 18:11:01.606213  344702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 18:11:01.614138  344702 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 18:11:01.614225  344702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 18:11:01.659847  344702 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1208 18:11:01.659925  344702 kubeadm.go:322] [preflight] Running pre-flight checks
	I1208 18:11:01.697087  344702 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1208 18:11:01.697179  344702 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1208 18:11:01.697231  344702 kubeadm.go:322] OS: Linux
	I1208 18:11:01.697295  344702 kubeadm.go:322] CGROUPS_CPU: enabled
	I1208 18:11:01.697366  344702 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1208 18:11:01.697457  344702 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1208 18:11:01.697512  344702 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1208 18:11:01.697563  344702 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1208 18:11:01.697613  344702 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1208 18:11:01.697688  344702 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1208 18:11:01.697763  344702 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1208 18:11:01.697855  344702 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1208 18:11:01.760710  344702 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 18:11:01.760873  344702 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 18:11:01.760981  344702 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1208 18:11:01.952375  344702 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 18:11:01.955791  344702 out.go:204]   - Generating certificates and keys ...
	I1208 18:11:01.955952  344702 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1208 18:11:01.956083  344702 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1208 18:11:02.123453  344702 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 18:11:02.236180  344702 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1208 18:11:02.368096  344702 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1208 18:11:02.524258  344702 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1208 18:11:02.705988  344702 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1208 18:11:02.706152  344702 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-766826 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 18:11:02.832331  344702 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1208 18:11:02.832499  344702 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-766826 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 18:11:03.006634  344702 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 18:11:03.056525  344702 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 18:11:03.149222  344702 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1208 18:11:03.149357  344702 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 18:11:03.252028  344702 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 18:11:03.363895  344702 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 18:11:03.577588  344702 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 18:11:03.742631  344702 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 18:11:03.743059  344702 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 18:11:03.746372  344702 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 18:11:03.748700  344702 out.go:204]   - Booting up control plane ...
	I1208 18:11:03.748828  344702 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 18:11:03.748949  344702 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 18:11:03.749032  344702 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 18:11:03.756427  344702 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 18:11:03.757221  344702 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 18:11:03.757286  344702 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1208 18:11:03.833133  344702 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1208 18:11:08.835507  344702 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002365 seconds
	I1208 18:11:08.835671  344702 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 18:11:08.848399  344702 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 18:11:09.373835  344702 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 18:11:09.374132  344702 kubeadm.go:322] [mark-control-plane] Marking the node addons-766826 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 18:11:09.884612  344702 kubeadm.go:322] [bootstrap-token] Using token: xgtwvu.3ufmvdlgrs1fk56u
	I1208 18:11:09.886228  344702 out.go:204]   - Configuring RBAC rules ...
	I1208 18:11:09.886375  344702 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 18:11:09.891194  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 18:11:09.897658  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 18:11:09.900370  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 18:11:09.903046  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 18:11:09.905625  344702 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 18:11:09.915886  344702 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 18:11:10.133515  344702 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1208 18:11:10.325401  344702 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1208 18:11:10.326791  344702 kubeadm.go:322] 
	I1208 18:11:10.326893  344702 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1208 18:11:10.326905  344702 kubeadm.go:322] 
	I1208 18:11:10.327029  344702 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1208 18:11:10.327042  344702 kubeadm.go:322] 
	I1208 18:11:10.327079  344702 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1208 18:11:10.327174  344702 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 18:11:10.327246  344702 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 18:11:10.327257  344702 kubeadm.go:322] 
	I1208 18:11:10.327330  344702 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1208 18:11:10.327370  344702 kubeadm.go:322] 
	I1208 18:11:10.327455  344702 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 18:11:10.327469  344702 kubeadm.go:322] 
	I1208 18:11:10.327549  344702 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1208 18:11:10.327692  344702 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 18:11:10.327789  344702 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 18:11:10.327800  344702 kubeadm.go:322] 
	I1208 18:11:10.327939  344702 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 18:11:10.328054  344702 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1208 18:11:10.328077  344702 kubeadm.go:322] 
	I1208 18:11:10.328203  344702 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xgtwvu.3ufmvdlgrs1fk56u \
	I1208 18:11:10.328341  344702 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 \
	I1208 18:11:10.328370  344702 kubeadm.go:322] 	--control-plane 
	I1208 18:11:10.328377  344702 kubeadm.go:322] 
	I1208 18:11:10.328495  344702 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1208 18:11:10.328512  344702 kubeadm.go:322] 
	I1208 18:11:10.328646  344702 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xgtwvu.3ufmvdlgrs1fk56u \
	I1208 18:11:10.328799  344702 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 
	I1208 18:11:10.330441  344702 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1208 18:11:10.330614  344702 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
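The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node with the standard kubeadm recipe (a sketch, assuming the CA sits at /var/lib/minikube/certs/ca.crt as configured earlier):

	# Recompute the discovery hash printed in the kubeadm join command
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'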
	I1208 18:11:10.330656  344702 cni.go:84] Creating CNI manager for ""
	I1208 18:11:10.330667  344702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:11:10.333332  344702 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1208 18:11:10.334794  344702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 18:11:10.339231  344702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1208 18:11:10.339251  344702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1208 18:11:10.357227  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 18:11:11.050501  344702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 18:11:11.050618  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.050618  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b minikube.k8s.io/name=addons-766826 minikube.k8s.io/updated_at=2023_12_08T18_11_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.058024  344702 ops.go:34] apiserver oom_adj: -16
	I1208 18:11:11.144393  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.221268  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.790315  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:12.290585  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:12.789888  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:13.289766  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:13.790647  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:14.289940  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:14.789778  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:15.290435  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:15.789697  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:16.290570  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:16.790307  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:17.290483  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:17.790701  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:18.289955  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:18.790534  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:19.290068  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:19.790488  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:20.290678  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:20.790507  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:21.290724  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:21.790683  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:22.289697  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:22.789755  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:22.856614  344702 kubeadm.go:1088] duration metric: took 11.806048867s to wait for elevateKubeSystemPrivileges.
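The run of identical kubectl get sa default calls above is a ~500ms poll waiting for kube-controller-manager to create the default ServiceAccount before the cluster-admin binding can be applied; the equivalent shell loop (a sketch) is:

	# Wait until the default ServiceAccount exists in the default namespace
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done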
	I1208 18:11:22.856657  344702 kubeadm.go:406] StartCluster complete in 21.300414231s
	I1208 18:11:22.856680  344702 settings.go:142] acquiring lock: {Name:mkb1d8fbfd540ec0ff42a4ec77782a6addbbad21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:22.856780  344702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:11:22.857145  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/kubeconfig: {Name:mk170d1df5bab3a276f3bc17a718825dd5b16d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:22.857327  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 18:11:22.857461  344702 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1208 18:11:22.857550  344702 config.go:182] Loaded profile config "addons-766826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:11:22.857559  344702 addons.go:69] Setting gcp-auth=true in profile "addons-766826"
	I1208 18:11:22.857575  344702 addons.go:69] Setting volumesnapshots=true in profile "addons-766826"
	I1208 18:11:22.857582  344702 addons.go:69] Setting metrics-server=true in profile "addons-766826"
	I1208 18:11:22.857591  344702 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-766826"
	I1208 18:11:22.857597  344702 addons.go:69] Setting cloud-spanner=true in profile "addons-766826"
	I1208 18:11:22.857604  344702 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-766826"
	I1208 18:11:22.857606  344702 addons.go:69] Setting default-storageclass=true in profile "addons-766826"
	I1208 18:11:22.857616  344702 addons.go:231] Setting addon metrics-server=true in "addons-766826"
	I1208 18:11:22.857622  344702 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-766826"
	I1208 18:11:22.857627  344702 addons.go:231] Setting addon cloud-spanner=true in "addons-766826"
	I1208 18:11:22.857643  344702 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-766826"
	I1208 18:11:22.857656  344702 addons.go:69] Setting storage-provisioner=true in profile "addons-766826"
	I1208 18:11:22.857615  344702 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-766826"
	I1208 18:11:22.857673  344702 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-766826"
	I1208 18:11:22.857683  344702 addons.go:231] Setting addon storage-provisioner=true in "addons-766826"
	I1208 18:11:22.857689  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857704  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857724  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857728  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857763  344702 addons.go:69] Setting ingress-dns=true in profile "addons-766826"
	I1208 18:11:22.857777  344702 addons.go:231] Setting addon ingress-dns=true in "addons-766826"
	I1208 18:11:22.857816  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857669  344702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-766826"
	I1208 18:11:22.857675  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.858101  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858210  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858229  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858229  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858248  344702 addons.go:69] Setting ingress=true in profile "addons-766826"
	I1208 18:11:22.858258  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858263  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858269  344702 addons.go:231] Setting addon ingress=true in "addons-766826"
	I1208 18:11:22.858314  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.858776  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858971  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.859107  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.859266  344702 addons.go:69] Setting registry=true in profile "addons-766826"
	I1208 18:11:22.859284  344702 addons.go:231] Setting addon registry=true in "addons-766826"
	I1208 18:11:22.859320  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.859683  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.857600  344702 mustload.go:65] Loading cluster: addons-766826
	I1208 18:11:22.860351  344702 config.go:182] Loaded profile config "addons-766826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:11:22.860632  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.861209  344702 addons.go:69] Setting helm-tiller=true in profile "addons-766826"
	I1208 18:11:22.861238  344702 addons.go:231] Setting addon helm-tiller=true in "addons-766826"
	I1208 18:11:22.861279  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.861689  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858778  344702 addons.go:69] Setting inspektor-gadget=true in profile "addons-766826"
	I1208 18:11:22.863568  344702 addons.go:231] Setting addon inspektor-gadget=true in "addons-766826"
	I1208 18:11:22.863654  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.864259  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.857596  344702 addons.go:231] Setting addon volumesnapshots=true in "addons-766826"
	I1208 18:11:22.864908  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.867403  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
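The burst of `docker container inspect addons-766826 --format={{.State.Status}}` calls above is minikube confirming the driver container is still running before each addon is wired up. A minimal Go sketch of that state probe, shelling out to the docker CLI the same way (minikube's cli_runner adds logging and timing, omitted here):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the same docker CLI invocation as the log lines above
// and returns the container's state string (e.g. "running", "exited").
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("addons-766826")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state)
}
```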
	I1208 18:11:22.891848  344702 out.go:177]   - Using image docker.io/registry:2.8.3
	I1208 18:11:22.893576  344702 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1208 18:11:22.895451  344702 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1208 18:11:22.895476  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1208 18:11:22.895536  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.900176  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1208 18:11:22.905964  344702 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-766826" context rescaled to 1 replicas
	I1208 18:11:22.907301  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1208 18:11:22.907468  344702 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1208 18:11:22.907505  344702 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:11:22.911780  344702 addons.go:231] Setting addon default-storageclass=true in "addons-766826"
	I1208 18:11:22.911859  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.912354  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.920243  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1208 18:11:22.920268  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1208 18:11:22.912577  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1208 18:11:22.920330  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.915605  344702 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-766826"
	I1208 18:11:22.922128  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1208 18:11:22.922331  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.925857  344702 out.go:177] * Verifying Kubernetes components...
	I1208 18:11:22.930884  344702 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1208 18:11:22.931330  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.932299  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.933059  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1208 18:11:22.933937  344702 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1208 18:11:22.937390  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:11:22.940223  344702 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 18:11:22.942508  344702 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1208 18:11:22.948187  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1208 18:11:22.948277  344702 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1208 18:11:22.949706  344702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:11:22.950325  344702 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1208 18:11:22.950340  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 18:11:22.951785  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1208 18:11:22.951858  344702 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1208 18:11:22.953570  344702 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 18:11:22.953587  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1208 18:11:22.953648  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.954689  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
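Each `sshutil.go:53] new ssh client` line is a fresh SSH connection to the node's forwarded port (127.0.0.1:33074 here), authenticated with the machine's id_rsa key as the "docker" user. A sketch of that dial using golang.org/x/crypto/ssh; skipping the host-key check is acceptable only because the target is a local minikube machine:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials a node the way the sshutil lines above describe:
// key-based auth against a locally forwarded port.
func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only; never for remote hosts
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := newSSHClient("127.0.0.1:33074", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/addons-766826/id_rsa"))
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
```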
	I1208 18:11:22.955196  344702 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 18:11:22.955484  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1208 18:11:22.955650  344702 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:11:22.955661  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1208 18:11:22.957108  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.961525  344702 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1208 18:11:22.961629  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1208 18:11:22.961708  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1208 18:11:22.961740  344702 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1208 18:11:22.961829  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 18:11:22.961914  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.963681  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1208 18:11:22.963695  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1208 18:11:22.963895  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1208 18:11:22.963907  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1208 18:11:22.963934  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:22.963958  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.963962  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.966934  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1208 18:11:22.965495  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.965569  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.965892  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.968544  344702 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 18:11:22.976603  344702 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1208 18:11:22.975604  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1208 18:11:22.979651  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1208 18:11:22.979840  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.982618  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1208 18:11:22.982745  344702 out.go:177]   - Using image docker.io/busybox:stable
	I1208 18:11:22.986355  344702 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 18:11:22.986377  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1208 18:11:22.986472  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.984324  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1208 18:11:22.991675  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1208 18:11:22.991758  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.994128  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.001506  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.007329  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.013343  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.016481  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.016540  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.017616  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.024344  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.024567  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 18:11:23.025424  344702 node_ready.go:35] waiting up to 6m0s for node "addons-766826" to be "Ready" ...
	I1208 18:11:23.028317  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.037063  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.043863  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	W1208 18:11:23.050647  344702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 18:11:23.050693  344702 retry.go:31] will retry after 324.404846ms: ssh: handshake failed: EOF
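The handshake EOF above is treated as transient: retry.go reports a randomized delay ("will retry after 324.404846ms") and dials again. An illustrative Go sketch of that pattern, jittered exponential backoff around a fallible operation; this is a generic sketch, not minikube's actual retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op up to attempts times, sleeping a randomized,
// growing interval between tries, as the retry.go log line above does.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jittered exponential backoff: base * 2^i, scaled by a random factor.
		sleep := time.Duration(float64(base) * float64(uint(1)<<uint(i)) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(4, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("final:", err)
}
```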
	I1208 18:11:23.319564  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:11:23.320203  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1208 18:11:23.320263  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1208 18:11:23.335703  344702 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1208 18:11:23.335732  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1208 18:11:23.429155  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 18:11:23.435135  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 18:11:23.522477  344702 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1208 18:11:23.522506  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1208 18:11:23.523104  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 18:11:23.528751  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1208 18:11:23.528831  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1208 18:11:23.535430  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 18:11:23.620106  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1208 18:11:23.620149  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1208 18:11:23.620453  344702 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1208 18:11:23.620483  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1208 18:11:23.628630  344702 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1208 18:11:23.628655  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1208 18:11:23.629977  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 18:11:23.634436  344702 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1208 18:11:23.634475  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1208 18:11:23.635859  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
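The pattern repeating through this phase is two-step: each manifest is copied into /etc/kubernetes/addons on the node (`scp memory -->`), then applied with the pinned kubectl binary under the node's kubeconfig. A sketch of the apply step as a local command; in the real flow it runs over the SSH session established above:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the log's
//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f <files...>
// invocation. sudo accepts VAR=value arguments, so the kubeconfig is set for
// the kubectl child process only.
func applyAddon(kubectl string, manifests ...string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddon("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/etc/kubernetes/addons/registry-rc.yaml",
		"/etc/kubernetes/addons/registry-svc.yaml",
		"/etc/kubernetes/addons/registry-proxy.yaml")
	fmt.Println(err)
}
```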
	I1208 18:11:23.821138  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 18:11:23.821169  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1208 18:11:23.822911  344702 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1208 18:11:23.822941  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1208 18:11:23.835503  344702 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1208 18:11:23.835539  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1208 18:11:23.923254  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1208 18:11:23.923361  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1208 18:11:23.930552  344702 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1208 18:11:23.930582  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1208 18:11:23.937900  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1208 18:11:24.031523  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 18:11:24.119962  344702 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1208 18:11:24.120040  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1208 18:11:24.120580  344702 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1208 18:11:24.120657  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1208 18:11:24.230034  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1208 18:11:24.230060  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1208 18:11:24.339019  344702 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1208 18:11:24.339055  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1208 18:11:24.526693  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1208 18:11:24.526729  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1208 18:11:24.535045  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1208 18:11:24.820398  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1208 18:11:24.820501  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1208 18:11:24.833412  344702 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1208 18:11:24.833447  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1208 18:11:24.926486  344702 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.901879474s)
	I1208 18:11:24.926641  344702 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
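The pipeline that just completed edits the coredns ConfigMap with sed and `kubectl replace` to add a hosts block resolving host.minikube.internal to the gateway IP. The same edit expressed with client-go, as a hedged sketch; the kubeconfig path and the 8-space insertion point are assumptions matching the sed script in the log:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts block ahead of the forward plugin, like the sed script does.
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if i := strings.Index(corefile, "        forward ."); i >= 0 &&
		!strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = corefile[:i] + hosts + corefile[i:]
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("host record injected into CoreDNS's ConfigMap")
	}
}
```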
	I1208 18:11:24.930530  344702 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 18:11:24.930559  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1208 18:11:25.035071  344702 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1208 18:11:25.035106  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1208 18:11:25.129836  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
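The recurring `node_ready.go:58` lines are a poll loop on the node's Ready condition, which stays False until the CNI and kubelet settle. A minimal client-go sketch of the check those lines perform (clientset construction as in the ConfigMap sketch above):

```go
package addons

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node's NodeReady condition is True,
// which is what the `has status "Ready":"False"` log lines are testing.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```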
	I1208 18:11:25.140356  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1208 18:11:25.140389  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1208 18:11:25.322023  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 18:11:25.434509  344702 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1208 18:11:25.434598  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1208 18:11:25.620440  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1208 18:11:25.620480  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1208 18:11:25.928996  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1208 18:11:25.929026  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1208 18:11:26.031827  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1208 18:11:26.437065  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1208 18:11:26.437156  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1208 18:11:26.822492  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1208 18:11:26.822573  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1208 18:11:27.127587  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 18:11:27.127675  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1208 18:11:27.335536  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 18:11:27.633935  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:28.139438  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.819756101s)
	I1208 18:11:28.139560  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.710357431s)
	I1208 18:11:28.430271  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.995096897s)
	I1208 18:11:29.523725  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.000533906s)
	I1208 18:11:29.523771  344702 addons.go:467] Verifying addon ingress=true in "addons-766826"
	I1208 18:11:29.523777  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.893731945s)
	I1208 18:11:29.523842  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.887954205s)
	I1208 18:11:29.523865  344702 addons.go:467] Verifying addon registry=true in "addons-766826"
	I1208 18:11:29.523721  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.988246134s)
	I1208 18:11:29.526820  344702 out.go:177] * Verifying ingress addon...
	I1208 18:11:29.523956  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.586028363s)
	I1208 18:11:29.524021  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.492419581s)
	I1208 18:11:29.524049  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.988921383s)
	I1208 18:11:29.524154  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.202074476s)
	I1208 18:11:29.524214  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.492296865s)
	I1208 18:11:29.528428  344702 out.go:177] * Verifying registry addon...
	I1208 18:11:29.528474  344702 addons.go:467] Verifying addon metrics-server=true in "addons-766826"
	W1208 18:11:29.528505  344702 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 18:11:29.529853  344702 retry.go:31] will retry after 161.001524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
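The failure above is a CRD establishment race, not a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for the new kind does not exist yet, and the retry (the `--force` re-apply below) succeeds once the API group is registered. One way to avoid the race is to wait for the group/version to become discoverable before applying dependent objects; a sketch using apimachinery's wait helpers (PollUntilContextTimeout exists in recent client-go releases; the choice of what to wait on is an assumption):

```go
package addons

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForGroupVersion polls the discovery endpoint until the given
// group/version (e.g. "snapshot.storage.k8s.io/v1") is served, which is the
// condition the failed apply above was missing.
func waitForGroupVersion(ctx context.Context, cs kubernetes.Interface, gv string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if _, err := cs.Discovery().ServerResourcesForGroupVersion(gv); err != nil {
				return false, nil // group not registered yet; keep polling
			}
			return true, nil
		})
}
```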
	I1208 18:11:29.529266  344702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1208 18:11:29.530687  344702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1208 18:11:29.535050  344702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1208 18:11:29.535074  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:29.535985  344702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 18:11:29.536005  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:29.538323  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:29.538839  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
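The `kapi.go:96] waiting for pod ...` lines that dominate the rest of the log are one poll loop per addon, each watching a label selector until its pods leave Pending. A client-go sketch of such a loop (the interval and timeout are illustrative); a caller would invoke it as, e.g., `waitForPodsRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)`:

```go
package addons

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsRunning polls pods matching selector in ns until every one is
// Running, mirroring the "current state: Pending" loop in the log.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep waiting
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}
```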
	I1208 18:11:29.690984  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 18:11:29.749396  344702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1208 18:11:29.749471  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:29.768835  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:29.938648  344702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1208 18:11:29.957921  344702 addons.go:231] Setting addon gcp-auth=true in "addons-766826"
	I1208 18:11:29.957995  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:29.958666  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:29.978692  344702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1208 18:11:29.978751  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:29.994417  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:30.045584  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:30.046183  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:30.046615  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:30.420250  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.084602629s)
	I1208 18:11:30.420303  344702 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-766826"
	I1208 18:11:30.422066  344702 out.go:177] * Verifying csi-hostpath-driver addon...
	I1208 18:11:30.424881  344702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1208 18:11:30.428628  344702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 18:11:30.428653  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:30.432552  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:30.543342  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:30.543851  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:30.802386  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.111357938s)
	I1208 18:11:30.805455  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1208 18:11:30.807196  344702 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1208 18:11:30.808694  344702 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1208 18:11:30.808712  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1208 18:11:30.825210  344702 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1208 18:11:30.825244  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1208 18:11:30.841279  344702 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 18:11:30.841301  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1208 18:11:30.857041  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 18:11:30.937184  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:31.043681  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:31.044908  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:31.437771  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:31.543044  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:31.544240  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:31.925138  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.068047577s)
	I1208 18:11:31.926062  344702 addons.go:467] Verifying addon gcp-auth=true in "addons-766826"
	I1208 18:11:31.928934  344702 out.go:177] * Verifying gcp-auth addon...
	I1208 18:11:31.931333  344702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1208 18:11:31.934129  344702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1208 18:11:31.934153  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:31.938866  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:31.943240  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:32.042767  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:32.043034  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:32.437925  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:32.447324  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:32.620753  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:32.622077  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:32.623276  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:32.937023  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:32.947061  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:33.043370  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:33.045841  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:33.438129  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:33.446764  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:33.542955  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:33.544645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:33.936862  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:33.947015  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:34.043224  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:34.043397  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:34.437577  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:34.447561  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:34.543128  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:34.543386  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:34.936851  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:34.946292  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:35.043327  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:35.043381  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:35.044983  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:35.437473  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:35.447801  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:35.542563  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:35.542802  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:35.937356  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:35.946797  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:36.042394  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:36.042805  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:36.437280  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:36.446750  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:36.543040  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:36.543374  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:36.937217  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:36.946869  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:37.042396  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:37.044297  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:37.437477  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:37.446967  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:37.542624  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:37.542697  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:37.544093  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:37.937228  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:37.946541  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:38.041987  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:38.042956  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:38.437517  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:38.446904  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:38.542378  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:38.542621  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:38.936763  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:38.946264  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:39.042733  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:39.043055  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:39.436344  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:39.446658  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:39.542395  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:39.542994  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:39.937109  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:39.946554  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:40.042167  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:40.042756  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:40.044508  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:40.436659  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:40.446231  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:40.543027  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:40.543322  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:40.936645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:40.947045  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:41.042625  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:41.042900  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:41.437561  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:41.446771  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:41.542323  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:41.542375  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:41.937325  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:41.946645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:42.042231  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:42.043242  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:42.436552  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:42.447137  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:42.542766  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:42.542884  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:42.544381  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:42.936365  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:42.946829  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:43.042115  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:43.042433  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:43.437042  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:43.446272  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:43.542752  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:43.543229  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:43.936546  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:43.947191  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:44.042889  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:44.043609  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:44.436995  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:44.446173  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:44.542655  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:44.543004  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:44.937018  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:44.946301  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:45.042731  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:45.043385  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:45.044605  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:45.436926  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:45.448054  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:45.542363  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:45.542770  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:45.936960  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:45.946348  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:46.043014  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:46.043476  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:46.436612  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:46.446853  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:46.542110  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:46.542272  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:46.937168  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:46.946238  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:47.043019  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:47.043032  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:47.437600  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:47.446851  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:47.542437  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:47.542729  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:47.543965  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:47.936877  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:47.946107  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:48.042364  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:48.042655  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:48.436871  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:48.446474  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:48.542908  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:48.543246  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:48.936410  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:48.947265  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:49.043336  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:49.043552  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:49.437067  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:49.446610  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:49.541881  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:49.542680  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:49.544141  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:49.937219  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:49.946637  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:50.042391  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:50.043104  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:50.436770  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:50.447044  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:50.542633  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:50.542908  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:50.936975  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:50.946235  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:51.042570  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:51.043024  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:51.437297  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:51.446780  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:51.542075  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:51.542358  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:51.937200  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:51.946656  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:52.042170  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:52.042873  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:52.044433  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:52.437507  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:52.446785  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:52.541968  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:52.542222  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:52.936785  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:52.946062  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:53.042491  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:53.042758  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:53.437171  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:53.446414  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:53.544668  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:53.544791  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:53.938403  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:53.947316  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:54.042741  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:54.043234  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:54.044468  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:54.436395  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:54.446791  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:54.542668  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:54.542971  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:54.937328  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:54.946813  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:55.042631  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:55.043227  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:55.436512  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:55.446548  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:55.542837  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:55.542999  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:55.936482  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:55.946790  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:56.042578  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:56.042627  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:56.436585  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:56.446915  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:56.542237  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:56.542584  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:56.544181  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:56.937289  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:56.946761  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:57.042104  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:57.042965  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:57.436707  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:57.445943  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:57.542519  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:57.542785  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:57.938832  344702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 18:11:57.938862  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:57.946618  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:58.043176  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:58.043525  344702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 18:11:58.043549  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:58.044608  344702 node_ready.go:49] node "addons-766826" has status "Ready":"True"
	I1208 18:11:58.044630  344702 node_ready.go:38] duration metric: took 35.019178359s waiting for node "addons-766826" to be "Ready" ...
	I1208 18:11:58.044639  344702 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:11:58.053757  344702 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gr7cp" in "kube-system" namespace to be "Ready" ...
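The node went Ready above after ~35s, and the harness pivots to per-pod checks, starting with coredns. The pod_ready.go lines that follow amount to polling a single pod's PodReady condition; a minimal client-go sketch of such a loop (illustrative only: the helper name and clientset wiring are assumptions, not minikube's actual code):

	// podready_sketch.go: poll one pod until its PodReady condition reports True.
	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls every 400ms until the named pod is Ready or the timeout expires.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors count as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					// mirrors the log's `has status "Ready":"False"` entries
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}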
	I1208 18:11:58.437828  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:58.448202  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:58.543888  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:58.544561  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:58.940432  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:58.947135  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:59.043026  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:59.043192  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:59.438650  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:59.446693  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:59.542992  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:59.544441  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:59.574822  344702 pod_ready.go:92] pod "coredns-5dd5756b68-gr7cp" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.574853  344702 pod_ready.go:81] duration metric: took 1.521068599s waiting for pod "coredns-5dd5756b68-gr7cp" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.574881  344702 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.579929  344702 pod_ready.go:92] pod "etcd-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.579951  344702 pod_ready.go:81] duration metric: took 5.060841ms waiting for pod "etcd-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.579966  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.621520  344702 pod_ready.go:92] pod "kube-apiserver-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.621545  344702 pod_ready.go:81] duration metric: took 41.570164ms waiting for pod "kube-apiserver-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.621558  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.645545  344702 pod_ready.go:92] pod "kube-controller-manager-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.645570  344702 pod_ready.go:81] duration metric: took 24.003196ms waiting for pod "kube-controller-manager-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.645585  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sqqhb" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.938241  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:59.946937  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:00.043522  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:00.043651  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:00.044765  344702 pod_ready.go:92] pod "kube-proxy-sqqhb" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:00.044784  344702 pod_ready.go:81] duration metric: took 399.192062ms waiting for pod "kube-proxy-sqqhb" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.044796  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.439650  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:00.445728  344702 pod_ready.go:92] pod "kube-scheduler-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:00.445752  344702 pod_ready.go:81] duration metric: took 400.948098ms waiting for pod "kube-scheduler-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.445765  344702 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.446969  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:00.544249  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:00.544779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:00.939505  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:00.947615  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:01.043358  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:01.044112  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:01.438098  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:01.446773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:01.543322  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:01.544030  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:01.939228  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:01.947685  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:02.043094  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:02.044963  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:02.438965  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:02.447380  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:02.543264  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:02.543756  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:02.752935  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:02.938867  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:02.947590  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:03.043647  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:03.044356  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:03.439928  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:03.447004  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:03.543472  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:03.543593  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:03.939241  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:03.946764  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:04.043438  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:04.043587  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:04.439163  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:04.447638  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:04.544629  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:04.544703  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:04.753562  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:04.939910  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:04.947210  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:05.043587  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:05.043663  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:05.439623  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:05.446894  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:05.542981  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:05.546325  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:05.938593  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:05.947831  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:06.042786  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:06.043814  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:06.438046  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:06.446906  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:06.543082  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:06.543232  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:06.938156  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:06.947229  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:07.042884  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:07.043007  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:07.252014  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:07.438711  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:07.446669  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:07.543112  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:07.543846  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:07.938208  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:07.947367  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:08.044096  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:08.044156  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:08.438151  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:08.447314  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:08.543169  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:08.543182  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:08.937942  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:08.947244  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:09.043666  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:09.043898  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:09.252883  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:09.437366  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:09.446303  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:09.543327  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:09.543635  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:09.939007  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:09.946631  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:10.043897  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:10.044041  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:10.441168  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:10.447071  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:10.542867  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:10.543073  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:10.937675  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:10.946954  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:11.043299  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:11.043349  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:11.438616  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:11.447507  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:11.544194  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:11.544456  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:11.824989  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:11.941355  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:11.952050  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:12.044488  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:12.045530  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:12.439022  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:12.447090  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:12.543389  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:12.543460  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:12.940962  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:12.947691  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:13.043971  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:13.043973  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:13.439129  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:13.447515  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:13.543442  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:13.543898  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:13.937807  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:13.947053  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:14.043402  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:14.043814  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:14.252658  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:14.439179  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:14.446008  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:14.542849  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:14.542959  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:14.938525  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:14.947637  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:15.043804  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:15.044385  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:15.438424  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:15.448030  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:15.543094  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:15.543139  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:15.938888  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:15.946575  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:16.043816  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:16.044111  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:16.438150  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:16.446540  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:16.543701  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:16.543704  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:16.751742  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:16.937434  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:16.946832  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:17.043208  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:17.044182  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:17.437773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:17.446929  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:17.542194  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:17.543270  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:17.938658  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:17.946474  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:18.043776  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:18.043889  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:18.438093  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:18.447302  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:18.543362  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:18.543400  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:18.751980  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:18.938393  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:18.946914  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:19.045091  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:19.045269  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:19.438600  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:19.447751  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:19.542844  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:19.544813  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:19.939571  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:19.947664  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:20.043293  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:20.044252  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:20.438773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:20.446675  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:20.543808  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:20.543842  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:20.756380  344702 pod_ready.go:92] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:20.756416  344702 pod_ready.go:81] duration metric: took 20.310639375s waiting for pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:20.756432  344702 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:20.938280  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:20.946672  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:21.043464  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:21.043718  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:21.439306  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:21.447627  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:21.543679  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:21.543757  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:21.938889  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:21.946952  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:22.042735  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:22.043531  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:22.438103  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:22.447076  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:22.543092  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:22.543180  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:22.841026  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:22.938501  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:22.946030  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:23.042883  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:23.042952  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:23.437710  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:23.446497  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:23.543152  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:23.543477  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:23.939254  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:23.947430  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:24.043865  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:24.043886  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:24.437796  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:24.447111  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:24.543730  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:24.545067  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:24.937870  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:24.946888  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:25.043092  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:25.043473  344702 kapi.go:107] duration metric: took 55.51278648s to wait for kubernetes.io/minikube-addons=registry ...
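That 55.5s registry figure closes out one of the kapi.go label-selector waits that dominate this log: re-list pods by selector roughly every half-second, stay "Pending" while the list is empty or any match is not yet Running, then emit a duration metric. A condensed sketch of such a loop, reusing the imports from the earlier snippet (an assumed helper, not the real kapi.go):

	// waitForSelector re-lists pods matching a label selector until at least one
	// exists and every match has reached phase Running.
	func waitForSelector(cs kubernetes.Interface, selector string, timeout time.Duration) error {
		start := time.Now()
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // the `current state: Pending: [<nil>]` case above
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return true, nil
		})
	}

Returning false rather than an error on an empty list is what lets the wait keep printing Pending until the addon's pods are even scheduled.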
	I1208 18:12:25.339704  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:25.437683  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:25.446654  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:25.541972  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:25.938957  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:25.946613  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:26.043417  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:26.440779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:26.447258  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:26.543799  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:26.939444  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:26.947191  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:27.043881  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:27.340561  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:27.439772  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:27.446972  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:27.543482  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:27.938332  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:27.947610  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:28.043475  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:28.440483  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:28.447521  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:28.544074  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:28.939424  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:28.947034  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:29.044600  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:29.439231  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:29.446780  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:29.542932  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:29.840035  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:29.938025  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:29.947344  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:30.043774  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:30.438187  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:30.447382  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:30.543186  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:30.937989  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:30.946862  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:31.042735  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:31.438033  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:31.446592  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:31.543906  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:31.926122  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:31.938725  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:32.026017  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:32.044559  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:32.340748  344702 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:32.340778  344702 pod_ready.go:81] duration metric: took 11.584337358s waiting for pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:32.340801  344702 pod_ready.go:38] duration metric: took 34.296151939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:12:32.340827  344702 api_server.go:52] waiting for apiserver process to appear ...
	I1208 18:12:32.340867  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 18:12:32.340925  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 18:12:32.439012  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:32.447610  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:32.454247  344702 cri.go:89] found id: "c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:32.454287  344702 cri.go:89] found id: ""
	I1208 18:12:32.454300  344702 logs.go:284] 1 containers: [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae]
	I1208 18:12:32.454364  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:32.522663  344702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 18:12:32.522743  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 18:12:32.544401  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:32.734753  344702 cri.go:89] found id: "4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:32.734782  344702 cri.go:89] found id: ""
	I1208 18:12:32.734793  344702 logs.go:284] 1 containers: [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065]
	I1208 18:12:32.734872  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:32.747429  344702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 18:12:32.747504  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 18:12:32.939163  344702 cri.go:89] found id: "cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:32.939193  344702 cri.go:89] found id: ""
	I1208 18:12:32.939203  344702 logs.go:284] 1 containers: [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76]
	I1208 18:12:32.939260  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:32.943021  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 18:12:32.943092  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 18:12:32.951149  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:32.951670  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:33.046091  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:33.144957  344702 cri.go:89] found id: "6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:33.145044  344702 cri.go:89] found id: ""
	I1208 18:12:33.145058  344702 logs.go:284] 1 containers: [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34]
	I1208 18:12:33.145138  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.149124  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 18:12:33.149192  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 18:12:33.321529  344702 cri.go:89] found id: "2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:33.321556  344702 cri.go:89] found id: ""
	I1208 18:12:33.321567  344702 logs.go:284] 1 containers: [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946]
	I1208 18:12:33.321620  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.325674  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 18:12:33.325743  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 18:12:33.429162  344702 cri.go:89] found id: "cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:33.429194  344702 cri.go:89] found id: ""
	I1208 18:12:33.429206  344702 logs.go:284] 1 containers: [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763]
	I1208 18:12:33.429266  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.433226  344702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 18:12:33.433311  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 18:12:33.441336  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:33.447395  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:33.535052  344702 cri.go:89] found id: "6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:33.535083  344702 cri.go:89] found id: ""
	I1208 18:12:33.535095  344702 logs.go:284] 1 containers: [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7]
	I1208 18:12:33.535154  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.539543  344702 logs.go:123] Gathering logs for etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] ...
	I1208 18:12:33.539580  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:33.544423  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:33.657142  344702 logs.go:123] Gathering logs for coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] ...
	I1208 18:12:33.657194  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:33.754798  344702 logs.go:123] Gathering logs for kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] ...
	I1208 18:12:33.754839  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:33.860488  344702 logs.go:123] Gathering logs for kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] ...
	I1208 18:12:33.860527  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:33.938312  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:33.947661  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:33.951488  344702 logs.go:123] Gathering logs for CRI-O ...
	I1208 18:12:33.951516  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 18:12:34.042945  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:34.101158  344702 logs.go:123] Gathering logs for container status ...
	I1208 18:12:34.101200  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 18:12:34.152441  344702 logs.go:123] Gathering logs for dmesg ...
	I1208 18:12:34.152477  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 18:12:34.183203  344702 logs.go:123] Gathering logs for describe nodes ...
	I1208 18:12:34.183247  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 18:12:34.347816  344702 logs.go:123] Gathering logs for kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] ...
	I1208 18:12:34.347850  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:34.382731  344702 logs.go:123] Gathering logs for kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] ...
	I1208 18:12:34.382761  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:34.438699  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:34.447019  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:34.487396  344702 logs.go:123] Gathering logs for kubelet ...
	I1208 18:12:34.487436  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 18:12:34.543183  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1208 18:12:34.573390  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:34.573569  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:34.608206  344702 logs.go:123] Gathering logs for kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] ...
	I1208 18:12:34.608250  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:34.670686  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:34.670720  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1208 18:12:34.670843  344702 out.go:239] X Problems detected in kubelet:
	W1208 18:12:34.670860  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:34.670870  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:34.670884  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:34.670901  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:12:34.937755  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:34.946674  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:35.042214  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:35.438645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:35.446614  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:35.543281  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:35.938779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:35.946828  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:36.042624  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:36.439052  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:36.447187  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:36.543270  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:36.937615  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:36.947194  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:37.043141  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:37.437450  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:37.446337  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:37.543009  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:37.939736  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:37.947271  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:38.042951  344702 kapi.go:107] duration metric: took 1m8.513674023s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1208 18:12:38.437982  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:38.447197  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:38.938398  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:38.946201  344702 kapi.go:107] duration metric: took 1m7.01486797s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1208 18:12:38.948237  344702 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-766826 cluster.
	I1208 18:12:38.949806  344702 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1208 18:12:38.951335  344702 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
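The three gcp-auth notes above are the addon's own summary of how it works: credentials get mounted into every new pod, and a pod opts out by carrying the `gcp-auth-skip-secret` label. A minimal sketch of such a pod spec, assuming the conventional value "true" (the messages above only name the label key; the pod name and image here are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                  # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"      # key taken from the addon message; value assumed
spec:
  containers:
    - name: app
      image: gcr.io/google-samples/hello-app:1.0

For pods created before the addon was enabled, the last message above applies: recreate them, or rerun addons enable with --refresh (e.g. "minikube -p addons-766826 addons enable gcp-auth --refresh").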
	I1208 18:12:39.438265  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:39.938280  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:40.438104  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:40.940896  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:41.438779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:41.938639  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:42.437583  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:42.939922  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:43.441903  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:43.938389  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:44.438265  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:44.672353  344702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 18:12:44.687134  344702 api_server.go:72] duration metric: took 1m21.774518861s to wait for apiserver process to appear ...
	I1208 18:12:44.687161  344702 api_server.go:88] waiting for apiserver healthz status ...
	I1208 18:12:44.687201  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 18:12:44.687259  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 18:12:44.726545  344702 cri.go:89] found id: "c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:44.726573  344702 cri.go:89] found id: ""
	I1208 18:12:44.726585  344702 logs.go:284] 1 containers: [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae]
	I1208 18:12:44.726634  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.729967  344702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 18:12:44.730034  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 18:12:44.766801  344702 cri.go:89] found id: "4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:44.766826  344702 cri.go:89] found id: ""
	I1208 18:12:44.766836  344702 logs.go:284] 1 containers: [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065]
	I1208 18:12:44.766894  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.770317  344702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 18:12:44.770392  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 18:12:44.837785  344702 cri.go:89] found id: "cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:44.837808  344702 cri.go:89] found id: ""
	I1208 18:12:44.837816  344702 logs.go:284] 1 containers: [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76]
	I1208 18:12:44.837869  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.841311  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 18:12:44.841379  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 18:12:44.874200  344702 cri.go:89] found id: "6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:44.874235  344702 cri.go:89] found id: ""
	I1208 18:12:44.874246  344702 logs.go:284] 1 containers: [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34]
	I1208 18:12:44.874311  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.877648  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 18:12:44.877720  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 18:12:44.938698  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:44.951460  344702 cri.go:89] found id: "2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:44.951480  344702 cri.go:89] found id: ""
	I1208 18:12:44.951488  344702 logs.go:284] 1 containers: [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946]
	I1208 18:12:44.951537  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.955253  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 18:12:44.955317  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 18:12:45.035041  344702 cri.go:89] found id: "cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:45.035071  344702 cri.go:89] found id: ""
	I1208 18:12:45.035082  344702 logs.go:284] 1 containers: [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763]
	I1208 18:12:45.035131  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:45.038532  344702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 18:12:45.038602  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 18:12:45.079565  344702 cri.go:89] found id: "6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:45.079592  344702 cri.go:89] found id: ""
	I1208 18:12:45.079601  344702 logs.go:284] 1 containers: [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7]
	I1208 18:12:45.079656  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:45.083042  344702 logs.go:123] Gathering logs for kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] ...
	I1208 18:12:45.083061  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:45.184280  344702 logs.go:123] Gathering logs for kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] ...
	I1208 18:12:45.184323  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:45.233023  344702 logs.go:123] Gathering logs for dmesg ...
	I1208 18:12:45.233054  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 18:12:45.261532  344702 logs.go:123] Gathering logs for kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] ...
	I1208 18:12:45.261566  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:45.340782  344702 logs.go:123] Gathering logs for etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] ...
	I1208 18:12:45.340825  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:45.384391  344702 logs.go:123] Gathering logs for coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] ...
	I1208 18:12:45.384428  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:45.437981  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:45.459037  344702 logs.go:123] Gathering logs for kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] ...
	I1208 18:12:45.459080  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:45.534202  344702 logs.go:123] Gathering logs for kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] ...
	I1208 18:12:45.534239  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:45.571208  344702 logs.go:123] Gathering logs for CRI-O ...
	I1208 18:12:45.571237  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 18:12:45.688748  344702 logs.go:123] Gathering logs for container status ...
	I1208 18:12:45.688787  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 18:12:45.740228  344702 logs.go:123] Gathering logs for kubelet ...
	I1208 18:12:45.740261  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1208 18:12:45.788459  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:45.788631  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:45.829017  344702 logs.go:123] Gathering logs for describe nodes ...
	I1208 18:12:45.829063  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 18:12:45.937773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:45.961327  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:45.961360  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1208 18:12:45.961425  344702 out.go:239] X Problems detected in kubelet:
	W1208 18:12:45.961441  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:45.961457  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:45.961471  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:45.961483  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:12:46.438073  344702 kapi.go:107] duration metric: took 1m16.013187384s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1208 18:12:46.440234  344702 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, inspektor-gadget, cloud-spanner, helm-tiller, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1208 18:12:46.441740  344702 addons.go:502] enable addons completed in 1m23.584281532s: enabled=[storage-provisioner ingress-dns storage-provisioner-rancher nvidia-device-plugin inspektor-gadget cloud-spanner helm-tiller metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1208 18:12:55.962705  344702 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1208 18:12:55.968101  344702 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1208 18:12:55.969183  344702 api_server.go:141] control plane version: v1.28.4
	I1208 18:12:55.969207  344702 api_server.go:131] duration metric: took 11.282039542s to wait for apiserver health ...
	I1208 18:12:55.969216  344702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 18:12:55.969239  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 18:12:55.969287  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 18:12:56.002589  344702 cri.go:89] found id: "c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:56.002609  344702 cri.go:89] found id: ""
	I1208 18:12:56.002618  344702 logs.go:284] 1 containers: [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae]
	I1208 18:12:56.002669  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.005907  344702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 18:12:56.005986  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 18:12:56.039317  344702 cri.go:89] found id: "4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:56.039339  344702 cri.go:89] found id: ""
	I1208 18:12:56.039347  344702 logs.go:284] 1 containers: [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065]
	I1208 18:12:56.039401  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.042640  344702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 18:12:56.042693  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 18:12:56.075357  344702 cri.go:89] found id: "cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:56.075385  344702 cri.go:89] found id: ""
	I1208 18:12:56.075399  344702 logs.go:284] 1 containers: [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76]
	I1208 18:12:56.075457  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.078654  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 18:12:56.078767  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 18:12:56.110980  344702 cri.go:89] found id: "6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:56.111001  344702 cri.go:89] found id: ""
	I1208 18:12:56.111009  344702 logs.go:284] 1 containers: [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34]
	I1208 18:12:56.111057  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.114289  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 18:12:56.114343  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 18:12:56.148911  344702 cri.go:89] found id: "2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:56.148933  344702 cri.go:89] found id: ""
	I1208 18:12:56.148941  344702 logs.go:284] 1 containers: [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946]
	I1208 18:12:56.148981  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.152447  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 18:12:56.152505  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 18:12:56.185509  344702 cri.go:89] found id: "cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:56.185538  344702 cri.go:89] found id: ""
	I1208 18:12:56.185548  344702 logs.go:284] 1 containers: [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763]
	I1208 18:12:56.185598  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.188968  344702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 18:12:56.189043  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 18:12:56.222226  344702 cri.go:89] found id: "6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:56.222256  344702 cri.go:89] found id: ""
	I1208 18:12:56.222275  344702 logs.go:284] 1 containers: [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7]
	I1208 18:12:56.222329  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.225689  344702 logs.go:123] Gathering logs for coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] ...
	I1208 18:12:56.225719  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:56.263810  344702 logs.go:123] Gathering logs for kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] ...
	I1208 18:12:56.263841  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:56.302691  344702 logs.go:123] Gathering logs for container status ...
	I1208 18:12:56.302723  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 18:12:56.345691  344702 logs.go:123] Gathering logs for kubelet ...
	I1208 18:12:56.345725  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1208 18:12:56.391606  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:56.391782  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:56.425899  344702 logs.go:123] Gathering logs for dmesg ...
	I1208 18:12:56.425937  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 18:12:56.454115  344702 logs.go:123] Gathering logs for etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] ...
	I1208 18:12:56.454153  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:56.495038  344702 logs.go:123] Gathering logs for kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] ...
	I1208 18:12:56.495075  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:56.553257  344702 logs.go:123] Gathering logs for kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] ...
	I1208 18:12:56.553295  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:56.586270  344702 logs.go:123] Gathering logs for CRI-O ...
	I1208 18:12:56.586303  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 18:12:56.660408  344702 logs.go:123] Gathering logs for describe nodes ...
	I1208 18:12:56.660444  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 18:12:56.757232  344702 logs.go:123] Gathering logs for kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] ...
	I1208 18:12:56.757263  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:56.800233  344702 logs.go:123] Gathering logs for kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] ...
	I1208 18:12:56.800265  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:56.833966  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:56.833992  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1208 18:12:56.834062  344702 out.go:239] X Problems detected in kubelet:
	W1208 18:12:56.834076  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:56.834086  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:56.834099  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:56.834112  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:13:06.844971  344702 system_pods.go:59] 19 kube-system pods found
	I1208 18:13:06.845002  344702 system_pods.go:61] "coredns-5dd5756b68-gr7cp" [d095f129-9a95-4ac0-bb7a-12d2353223cd] Running
	I1208 18:13:06.845007  344702 system_pods.go:61] "csi-hostpath-attacher-0" [19f35feb-6448-4e6f-b49b-c670972cc314] Running
	I1208 18:13:06.845011  344702 system_pods.go:61] "csi-hostpath-resizer-0" [970d12ad-fd8f-4488-bca9-9f3d9e3bcb98] Running
	I1208 18:13:06.845015  344702 system_pods.go:61] "csi-hostpathplugin-nffnm" [e337b550-055c-424a-af38-ba18f2f436de] Running
	I1208 18:13:06.845019  344702 system_pods.go:61] "etcd-addons-766826" [d4666697-78e7-4a9b-9317-9511a0005ade] Running
	I1208 18:13:06.845022  344702 system_pods.go:61] "kindnet-bdq5w" [e1139c0f-a09a-4fce-9e52-95a17bc4b151] Running
	I1208 18:13:06.845026  344702 system_pods.go:61] "kube-apiserver-addons-766826" [35c9f38c-7943-42b2-acc3-4819e39b15a1] Running
	I1208 18:13:06.845030  344702 system_pods.go:61] "kube-controller-manager-addons-766826" [d6aa247e-faad-4782-8377-f8d2255ea109] Running
	I1208 18:13:06.845037  344702 system_pods.go:61] "kube-ingress-dns-minikube" [4dbe76f2-999f-4e8a-beac-8c4693152b8f] Running
	I1208 18:13:06.845040  344702 system_pods.go:61] "kube-proxy-sqqhb" [b59bf415-faa1-43be-8604-f2e271f4257a] Running
	I1208 18:13:06.845044  344702 system_pods.go:61] "kube-scheduler-addons-766826" [1d43bdbc-d587-443a-a9d0-9ec51334900a] Running
	I1208 18:13:06.845049  344702 system_pods.go:61] "metrics-server-7c66d45ddc-zrxqf" [96be6ea9-f7ed-447e-96f0-2de2852c5689] Running
	I1208 18:13:06.845053  344702 system_pods.go:61] "nvidia-device-plugin-daemonset-2vjv7" [fbd353d3-71e8-4b51-9170-9716493afe0b] Running
	I1208 18:13:06.845057  344702 system_pods.go:61] "registry-n29ff" [51d60be4-1fcd-4243-a9f5-b01f0c18e985] Running
	I1208 18:13:06.845063  344702 system_pods.go:61] "registry-proxy-pg8rp" [831df691-d6e7-47e4-81c5-ec68788fcdb4] Running
	I1208 18:13:06.845067  344702 system_pods.go:61] "snapshot-controller-58dbcc7b99-dnszh" [24781340-e2e3-49b0-815a-7d325d7e1212] Running
	I1208 18:13:06.845073  344702 system_pods.go:61] "snapshot-controller-58dbcc7b99-f7f7s" [fd8ba136-b4ca-4eb4-a724-daa214f987ce] Running
	I1208 18:13:06.845077  344702 system_pods.go:61] "storage-provisioner" [797fe11e-ddc2-494c-b345-9391a39ae877] Running
	I1208 18:13:06.845082  344702 system_pods.go:61] "tiller-deploy-7b677967b9-lf6zk" [bb5789f0-c460-44b1-8cef-9b34b3892cf5] Running
	I1208 18:13:06.845088  344702 system_pods.go:74] duration metric: took 10.875866123s to wait for pod list to return data ...
	I1208 18:13:06.845099  344702 default_sa.go:34] waiting for default service account to be created ...
	I1208 18:13:06.847373  344702 default_sa.go:45] found service account: "default"
	I1208 18:13:06.847400  344702 default_sa.go:55] duration metric: took 2.294923ms for default service account to be created ...
	I1208 18:13:06.847408  344702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 18:13:06.856731  344702 system_pods.go:86] 19 kube-system pods found
	I1208 18:13:06.856759  344702 system_pods.go:89] "coredns-5dd5756b68-gr7cp" [d095f129-9a95-4ac0-bb7a-12d2353223cd] Running
	I1208 18:13:06.856765  344702 system_pods.go:89] "csi-hostpath-attacher-0" [19f35feb-6448-4e6f-b49b-c670972cc314] Running
	I1208 18:13:06.856768  344702 system_pods.go:89] "csi-hostpath-resizer-0" [970d12ad-fd8f-4488-bca9-9f3d9e3bcb98] Running
	I1208 18:13:06.856772  344702 system_pods.go:89] "csi-hostpathplugin-nffnm" [e337b550-055c-424a-af38-ba18f2f436de] Running
	I1208 18:13:06.856776  344702 system_pods.go:89] "etcd-addons-766826" [d4666697-78e7-4a9b-9317-9511a0005ade] Running
	I1208 18:13:06.856782  344702 system_pods.go:89] "kindnet-bdq5w" [e1139c0f-a09a-4fce-9e52-95a17bc4b151] Running
	I1208 18:13:06.856786  344702 system_pods.go:89] "kube-apiserver-addons-766826" [35c9f38c-7943-42b2-acc3-4819e39b15a1] Running
	I1208 18:13:06.856790  344702 system_pods.go:89] "kube-controller-manager-addons-766826" [d6aa247e-faad-4782-8377-f8d2255ea109] Running
	I1208 18:13:06.856794  344702 system_pods.go:89] "kube-ingress-dns-minikube" [4dbe76f2-999f-4e8a-beac-8c4693152b8f] Running
	I1208 18:13:06.856798  344702 system_pods.go:89] "kube-proxy-sqqhb" [b59bf415-faa1-43be-8604-f2e271f4257a] Running
	I1208 18:13:06.856802  344702 system_pods.go:89] "kube-scheduler-addons-766826" [1d43bdbc-d587-443a-a9d0-9ec51334900a] Running
	I1208 18:13:06.856806  344702 system_pods.go:89] "metrics-server-7c66d45ddc-zrxqf" [96be6ea9-f7ed-447e-96f0-2de2852c5689] Running
	I1208 18:13:06.856810  344702 system_pods.go:89] "nvidia-device-plugin-daemonset-2vjv7" [fbd353d3-71e8-4b51-9170-9716493afe0b] Running
	I1208 18:13:06.856815  344702 system_pods.go:89] "registry-n29ff" [51d60be4-1fcd-4243-a9f5-b01f0c18e985] Running
	I1208 18:13:06.856818  344702 system_pods.go:89] "registry-proxy-pg8rp" [831df691-d6e7-47e4-81c5-ec68788fcdb4] Running
	I1208 18:13:06.856822  344702 system_pods.go:89] "snapshot-controller-58dbcc7b99-dnszh" [24781340-e2e3-49b0-815a-7d325d7e1212] Running
	I1208 18:13:06.856826  344702 system_pods.go:89] "snapshot-controller-58dbcc7b99-f7f7s" [fd8ba136-b4ca-4eb4-a724-daa214f987ce] Running
	I1208 18:13:06.856830  344702 system_pods.go:89] "storage-provisioner" [797fe11e-ddc2-494c-b345-9391a39ae877] Running
	I1208 18:13:06.856833  344702 system_pods.go:89] "tiller-deploy-7b677967b9-lf6zk" [bb5789f0-c460-44b1-8cef-9b34b3892cf5] Running
	I1208 18:13:06.856839  344702 system_pods.go:126] duration metric: took 9.426847ms to wait for k8s-apps to be running ...
	I1208 18:13:06.856845  344702 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 18:13:06.856890  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:13:06.868404  344702 system_svc.go:56] duration metric: took 11.546991ms WaitForService to wait for kubelet.
	I1208 18:13:06.868434  344702 kubeadm.go:581] duration metric: took 1m43.955826122s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1208 18:13:06.868462  344702 node_conditions.go:102] verifying NodePressure condition ...
	I1208 18:13:06.871556  344702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1208 18:13:06.871604  344702 node_conditions.go:123] node cpu capacity is 8
	I1208 18:13:06.871619  344702 node_conditions.go:105] duration metric: took 3.15144ms to run NodePressure ...
	I1208 18:13:06.871630  344702 start.go:228] waiting for startup goroutines ...
	I1208 18:13:06.871641  344702 start.go:233] waiting for cluster config update ...
	I1208 18:13:06.871655  344702 start.go:242] writing updated cluster config ...
	I1208 18:13:06.871895  344702 ssh_runner.go:195] Run: rm -f paused
	I1208 18:13:06.921602  344702 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1208 18:13:06.925500  344702 out.go:177] * Done! kubectl is now configured to use "addons-766826" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.568561554Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=ca3b6280-1a4b-4b7c-9831-1123e640d1a0 name=/runtime.v1.ImageService/PullImage
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.569240133Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=79318ed2-0c19-4be0-b7f0-3b4d71e4cdf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.570148584Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=79318ed2-0c19-4be0-b7f0-3b4d71e4cdf5 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.571056108Z" level=info msg="Creating container: default/hello-world-app-5d77478584-25bdl/hello-world-app" id=785af3d2-6577-45ea-87e3-441c105751d0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.571163873Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.645692468Z" level=info msg="Created container e5e2c170d3739fa4616877b418eed5d11ad8b8979d5d40a27d25085294a5281e: default/hello-world-app-5d77478584-25bdl/hello-world-app" id=785af3d2-6577-45ea-87e3-441c105751d0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.646275870Z" level=info msg="Starting container: e5e2c170d3739fa4616877b418eed5d11ad8b8979d5d40a27d25085294a5281e" id=007e743e-1ec2-468e-b08a-fb1b026b6c0b name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 18:15:50 addons-766826 crio[949]: time="2023-12-08 18:15:50.654794428Z" level=info msg="Started container" PID=10980 containerID=e5e2c170d3739fa4616877b418eed5d11ad8b8979d5d40a27d25085294a5281e description=default/hello-world-app-5d77478584-25bdl/hello-world-app id=007e743e-1ec2-468e-b08a-fb1b026b6c0b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c67964e1bc3d618dd2058ef4a0cf2df4e8e4ac36758d29857d39f6ef42e2e8ca
	Dec 08 18:15:51 addons-766826 crio[949]: time="2023-12-08 18:15:51.535965768Z" level=info msg="Removing container: 15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c" id=c16d2f7f-e4b3-4157-8e73-bcd9c7fd4880 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 18:15:51 addons-766826 crio[949]: time="2023-12-08 18:15:51.551608290Z" level=info msg="Removed container 15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=c16d2f7f-e4b3-4157-8e73-bcd9c7fd4880 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 18:15:53 addons-766826 crio[949]: time="2023-12-08 18:15:53.122423211Z" level=info msg="Stopping container: 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155 (timeout: 2s)" id=303ea9d1-02a1-417c-9e66-3cbccc545fef name=/runtime.v1.RuntimeService/StopContainer
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.130789445Z" level=warning msg="Stopping container 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=303ea9d1-02a1-417c-9e66-3cbccc545fef name=/runtime.v1.RuntimeService/StopContainer
	Dec 08 18:15:55 addons-766826 conmon[5582]: conmon 0c195e40e848bae4396e <ninfo>: container 5594 exited with status 137
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.274605460Z" level=info msg="Stopped container 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155: ingress-nginx/ingress-nginx-controller-7c6974c4d8-nzbhz/controller" id=303ea9d1-02a1-417c-9e66-3cbccc545fef name=/runtime.v1.RuntimeService/StopContainer
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.275173910Z" level=info msg="Stopping pod sandbox: ba791fdc4560452227de49b6f49e2873f8c7bf130f7b27e3823ee4180aa41fa7" id=8068efe1-18c7-4a0c-ab23-6ed5f2742b27 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.278993749Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-V2Q4YASSIJKE3BQ6 - [0:0]\n:KUBE-HP-3HD3L6CMLFOOLRSZ - [0:0]\n-X KUBE-HP-3HD3L6CMLFOOLRSZ\n-X KUBE-HP-V2Q4YASSIJKE3BQ6\nCOMMIT\n"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.280452865Z" level=info msg="Closing host port tcp:80"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.280506042Z" level=info msg="Closing host port tcp:443"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.281787297Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.281806647Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.281935468Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-nzbhz Namespace:ingress-nginx ID:ba791fdc4560452227de49b6f49e2873f8c7bf130f7b27e3823ee4180aa41fa7 UID:86132242-3460-406f-9276-ad5d62038cd2 NetNS:/var/run/netns/a74a06e8-6288-4e49-91bc-b1fa4c6a030b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.282051563Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-nzbhz from CNI network \"kindnet\" (type=ptp)"
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.324019543Z" level=info msg="Stopped pod sandbox: ba791fdc4560452227de49b6f49e2873f8c7bf130f7b27e3823ee4180aa41fa7" id=8068efe1-18c7-4a0c-ab23-6ed5f2742b27 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.550803424Z" level=info msg="Removing container: 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155" id=33901fe8-8ba7-464f-b504-ae90519f18db name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 18:15:55 addons-766826 crio[949]: time="2023-12-08 18:15:55.567575159Z" level=info msg="Removed container 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155: ingress-nginx/ingress-nginx-controller-7c6974c4d8-nzbhz/controller" id=33901fe8-8ba7-464f-b504-ae90519f18db name=/runtime.v1.RuntimeService/RemoveContainer
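The CRI-O excerpt above also records how the ingress-nginx controller was torn down: the stop request's 2-second grace period timed out, so the runtime escalated to SIGKILL, which is why conmon reports the container process exiting with status 137 (128 + 9, the SIGKILL convention). The same sequence could be reproduced by hand with roughly:

  sudo crictl stop --timeout 2 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155

(the container ID is the one from the log; --timeout is crictl's per-stop grace period in seconds).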
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e5e2c170d3739       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      9 seconds ago       Running             hello-world-app           0                   c67964e1bc3d6       hello-world-app-5d77478584-25bdl
	18d036c4f9af6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce            45 seconds ago      Exited              gadget                    5                   a640bace6f648       gadget-p6mlj
	dfca22643b722       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   fd15fe9e4b117       headlamp-777fd4b855-xwtlg
	c73f0562bf2cc       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   3d1f216fb6c5b       nginx
	d128959929e79       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   f1ecc38efca75       gcp-auth-d4c87556c-rd4hl
	00bed82a83ec8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   4cfa4664d8fcf       ingress-nginx-admission-patch-gh9hw
	9bfb3bf0f01ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   3f19abc9082a3       ingress-nginx-admission-create-4dfwg
	0afc54229499c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   f9a5a8d390c6d       storage-provisioner
	cbd9f355eab53       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   d8245fa45ba4a       coredns-5dd5756b68-gr7cp
	6603f43b0eb58       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   c2310d083a1c1       kindnet-bdq5w
	2c809a9eebc06       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   21aa5178a09dc       kube-proxy-sqqhb
	c631bcea8eada       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   73a541f0f6c09       kube-apiserver-addons-766826
	cd9915ab51276       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   74cc43c534dd2       kube-controller-manager-addons-766826
	4e07b412711cd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   7de1ce2d07c0e       etcd-addons-766826
	6499d11f12dc1       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   c8dfeda773c13       kube-scheduler-addons-766826
	
	* 
	* ==> coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] <==
	* [INFO] 10.244.0.16:41227 - 24001 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090011s
	[INFO] 10.244.0.16:54280 - 34011 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004204961s
	[INFO] 10.244.0.16:54280 - 5340 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00494698s
	[INFO] 10.244.0.16:56475 - 22769 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005108229s
	[INFO] 10.244.0.16:56475 - 29436 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006668247s
	[INFO] 10.244.0.16:50687 - 19693 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004716229s
	[INFO] 10.244.0.16:50687 - 29679 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006907537s
	[INFO] 10.244.0.16:51631 - 44046 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096578s
	[INFO] 10.244.0.16:51631 - 32272 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119066s
	[INFO] 10.244.0.20:48230 - 19832 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213524s
	[INFO] 10.244.0.20:56985 - 61725 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168435s
	[INFO] 10.244.0.20:42018 - 12395 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140347s
	[INFO] 10.244.0.20:55815 - 52735 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170507s
	[INFO] 10.244.0.20:46212 - 10641 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114661s
	[INFO] 10.244.0.20:35719 - 51155 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099156s
	[INFO] 10.244.0.20:43892 - 56512 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007038315s
	[INFO] 10.244.0.20:48795 - 56440 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007313715s
	[INFO] 10.244.0.20:54459 - 48778 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006889701s
	[INFO] 10.244.0.20:48816 - 23564 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007918809s
	[INFO] 10.244.0.20:55158 - 26318 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006514256s
	[INFO] 10.244.0.20:48460 - 59145 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007569835s
	[INFO] 10.244.0.20:43378 - 27475 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000637453s
	[INFO] 10.244.0.20:44992 - 5450 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000825741s
	[INFO] 10.244.0.24:39971 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207037s
	[INFO] 10.244.0.24:48044 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157619s
	
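	Note on the CoreDNS entries above: the NXDOMAIN/NOERROR pairs are ordinary search-path expansion. With the default ndots:5 in pod resolv.conf, a lookup such as storage.googleapis.com is first tried against every search suffix (cluster.local plus the GCE-internal domains) before the bare name returns NOERROR. A hedged way to confirm the search list from a client pod, where <pod> is an illustrative placeholder rather than a name from this run:
	
	  kubectl --context addons-766826 exec <pod> -- cat /etc/resolv.conf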
	* 
	* ==> describe nodes <==
	* Name:               addons-766826
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-766826
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b
	                    minikube.k8s.io/name=addons-766826
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_08T18_11_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-766826
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Dec 2023 18:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-766826
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Dec 2023 18:15:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Dec 2023 18:13:43 +0000   Fri, 08 Dec 2023 18:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Dec 2023 18:13:43 +0000   Fri, 08 Dec 2023 18:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Dec 2023 18:13:43 +0000   Fri, 08 Dec 2023 18:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Dec 2023 18:13:43 +0000   Fri, 08 Dec 2023 18:11:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-766826
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 37a00760348a48c8ac7cc6f4c06da6dd
	  System UUID:                426938ba-0e3e-4298-85c5-a948711395ac
	  Boot ID:                    fbb3830a-6e88-496f-844f-172e564c45c3
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-25bdl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gadget                      gadget-p6mlj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  gcp-auth                    gcp-auth-d4c87556c-rd4hl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  headlamp                    headlamp-777fd4b855-xwtlg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 coredns-5dd5756b68-gr7cp                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m37s
	  kube-system                 etcd-addons-766826                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m51s
	  kube-system                 kindnet-bdq5w                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m37s
	  kube-system                 kube-apiserver-addons-766826             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-766826    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-sqqhb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-766826             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m32s  kube-proxy       
	  Normal  Starting                 4m50s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m50s  kubelet          Node addons-766826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s  kubelet          Node addons-766826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s  kubelet          Node addons-766826 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m38s  node-controller  Node addons-766826 event: Registered Node addons-766826 in Controller
	  Normal  NodeReady                4m3s   kubelet          Node addons-766826 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 ad 2b 90 71 c9 08 06
	[  +0.015014] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 80 af a2 6c d3 08 06
	[  +1.246435] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a e5 8d e2 3a 9d 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5a 55 7e 00 fc 08 06
	[Dec 8 17:24] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 23 97 95 53 1e 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 80 af a2 6c d3 08 06
	[Dec 8 18:13] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	[  +1.003914] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	[  +2.015813] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	[  +4.195571] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	[  +8.187151] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	[Dec 8 18:14] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	[ +33.024606] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 1e 26 40 ea 92 10 16 ad 19 f8 1f 3f 08 00
	
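	Note on the dmesg entries above: "martian source" means the kernel saw a packet on eth0 whose source address it considers impossible for that interface, here 127.0.0.1 and pod-network addresses. This is expected noise once kube-proxy sets route_localnet=1 (see its log below). A hedged spot-check of the relevant sysctls on the node:
	
	  minikube -p addons-766826 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians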
	* 
	* ==> etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] <==
	* {"level":"info","ts":"2023-12-08T18:11:05.237549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2023-12-08T18:11:23.732642Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.560932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-12-08T18:11:23.732728Z","caller":"traceutil/trace.go:171","msg":"trace[198064214] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:343; }","duration":"103.666302ms","start":"2023-12-08T18:11:23.629047Z","end":"2023-12-08T18:11:23.732713Z","steps":["trace[198064214] 'range keys from in-memory index tree'  (duration: 103.458031ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:23.732891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.583332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-766826\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-12-08T18:11:23.73298Z","caller":"traceutil/trace.go:171","msg":"trace[1989600869] range","detail":"{range_begin:/registry/minions/addons-766826; range_end:; response_count:1; response_revision:343; }","duration":"103.679827ms","start":"2023-12-08T18:11:23.629286Z","end":"2023-12-08T18:11:23.732966Z","steps":["trace[1989600869] 'range keys from in-memory index tree'  (duration: 103.501553ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:23.733344Z","caller":"traceutil/trace.go:171","msg":"trace[256367974] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"103.791983ms","start":"2023-12-08T18:11:23.62954Z","end":"2023-12-08T18:11:23.733332Z","steps":["trace[256367974] 'process raft request'  (duration: 90.781279ms)","trace[256367974] 'compare'  (duration: 12.188541ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:11:23.733367Z","caller":"traceutil/trace.go:171","msg":"trace[1998711070] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"102.635603ms","start":"2023-12-08T18:11:23.630722Z","end":"2023-12-08T18:11:23.733357Z","steps":["trace[1998711070] 'process raft request'  (duration: 102.490478ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:23.733515Z","caller":"traceutil/trace.go:171","msg":"trace[735671773] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"102.581799ms","start":"2023-12-08T18:11:23.630926Z","end":"2023-12-08T18:11:23.733508Z","steps":["trace[735671773] 'process raft request'  (duration: 102.338818ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:23.733523Z","caller":"traceutil/trace.go:171","msg":"trace[1525709568] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"102.503537ms","start":"2023-12-08T18:11:23.631012Z","end":"2023-12-08T18:11:23.733515Z","steps":["trace[1525709568] 'process raft request'  (duration: 102.290187ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:25.532937Z","caller":"traceutil/trace.go:171","msg":"trace[1960090481] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"109.397036ms","start":"2023-12-08T18:11:25.423521Z","end":"2023-12-08T18:11:25.532918Z","steps":["trace[1960090481] 'process raft request'  (duration: 97.364173ms)","trace[1960090481] 'compare'  (duration: 11.648341ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:11:25.820831Z","caller":"traceutil/trace.go:171","msg":"trace[712776121] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"198.312889ms","start":"2023-12-08T18:11:25.622497Z","end":"2023-12-08T18:11:25.820809Z","steps":["trace[712776121] 'process raft request'  (duration: 197.865601ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:25.925416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-08T18:11:25.622445Z","time spent":"302.504195ms","remote":"127.0.0.1:38348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":197,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:212 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:134 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" > >"}
	{"level":"info","ts":"2023-12-08T18:11:26.134734Z","caller":"traceutil/trace.go:171","msg":"trace[1948602233] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"108.392024ms","start":"2023-12-08T18:11:26.026323Z","end":"2023-12-08T18:11:26.134715Z","steps":["trace[1948602233] 'process raft request'  (duration: 107.972381ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:26.240151Z","caller":"traceutil/trace.go:171","msg":"trace[943220108] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"103.852874ms","start":"2023-12-08T18:11:26.136278Z","end":"2023-12-08T18:11:26.240131Z","steps":["trace[943220108] 'process raft request'  (duration: 91.295321ms)","trace[943220108] 'compare'  (duration: 12.4531ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:11:26.535312Z","caller":"traceutil/trace.go:171","msg":"trace[1119255451] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"190.699483ms","start":"2023-12-08T18:11:26.344595Z","end":"2023-12-08T18:11:26.535295Z","steps":["trace[1119255451] 'process raft request'  (duration: 190.641831ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:26.535606Z","caller":"traceutil/trace.go:171","msg":"trace[173606981] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"191.228132ms","start":"2023-12-08T18:11:26.344366Z","end":"2023-12-08T18:11:26.535594Z","steps":["trace[173606981] 'process raft request'  (duration: 186.757384ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:28.037689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.076715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregated-metrics-reader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-08T18:11:28.037762Z","caller":"traceutil/trace.go:171","msg":"trace[905353659] range","detail":"{range_begin:/registry/clusterroles/system:aggregated-metrics-reader; range_end:; response_count:0; response_revision:473; }","duration":"100.157924ms","start":"2023-12-08T18:11:27.93759Z","end":"2023-12-08T18:11:28.037748Z","steps":["trace[905353659] 'agreement among raft nodes before linearized reading'  (duration: 100.058209ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:28.129125Z","caller":"traceutil/trace.go:171","msg":"trace[1743088610] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"105.817797ms","start":"2023-12-08T18:11:28.023279Z","end":"2023-12-08T18:11:28.129097Z","steps":["trace[1743088610] 'process raft request'  (duration: 102.895289ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:28.12988Z","caller":"traceutil/trace.go:171","msg":"trace[554597905] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"103.720153ms","start":"2023-12-08T18:11:28.026138Z","end":"2023-12-08T18:11:28.129858Z","steps":["trace[554597905] 'process raft request'  (duration: 102.489608ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:28.130318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.606337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:4998"}
	{"level":"info","ts":"2023-12-08T18:11:28.139411Z","caller":"traceutil/trace.go:171","msg":"trace[488964075] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:480; }","duration":"116.698248ms","start":"2023-12-08T18:11:28.02269Z","end":"2023-12-08T18:11:28.139388Z","steps":["trace[488964075] 'agreement among raft nodes before linearized reading'  (duration: 107.574326ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:12:43.573328Z","caller":"traceutil/trace.go:171","msg":"trace[1020358742] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"129.689246ms","start":"2023-12-08T18:12:43.44361Z","end":"2023-12-08T18:12:43.5733Z","steps":["trace[1020358742] 'process raft request'  (duration: 129.448663ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:12:48.993426Z","caller":"traceutil/trace.go:171","msg":"trace[254635281] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"124.182441ms","start":"2023-12-08T18:12:48.869219Z","end":"2023-12-08T18:12:48.993401Z","steps":["trace[254635281] 'process raft request'  (duration: 61.399105ms)","trace[254635281] 'compare'  (duration: 62.649277ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:14:08.980635Z","caller":"traceutil/trace.go:171","msg":"trace[815562082] transaction","detail":"{read_only:false; response_revision:1689; number_of_response:1; }","duration":"123.796823ms","start":"2023-12-08T18:14:08.856795Z","end":"2023-12-08T18:14:08.980592Z","steps":["trace[815562082] 'process raft request'  (duration: 62.48578ms)","trace[815562082] 'compare'  (duration: 61.198301ms)"],"step_count":2}
	
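	Note on the etcd entries above: the "apply request took too long" warnings mean individual reads and raft applies exceeded etcd's 100ms budget, which on a shared CI node usually points to slow disk fsync rather than a cluster fault. A hedged health check, assuming minikube's usual certificate locations under /var/lib/minikube/certs/etcd:
	
	  kubectl --context addons-766826 -n kube-system exec etcd-addons-766826 -- etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status -w table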
	* 
	* ==> gcp-auth [d128959929e7981c958c96e650660a47cfacb39887e367236929836936acc83f] <==
	* 2023/12/08 18:12:38 GCP Auth Webhook started!
	2023/12/08 18:13:07 Ready to marshal response ...
	2023/12/08 18:13:07 Ready to write response ...
	2023/12/08 18:13:07 Ready to marshal response ...
	2023/12/08 18:13:07 Ready to write response ...
	2023/12/08 18:13:09 Ready to marshal response ...
	2023/12/08 18:13:09 Ready to write response ...
	2023/12/08 18:13:17 Ready to marshal response ...
	2023/12/08 18:13:17 Ready to write response ...
	2023/12/08 18:13:17 Ready to marshal response ...
	2023/12/08 18:13:17 Ready to write response ...
	2023/12/08 18:13:20 Ready to marshal response ...
	2023/12/08 18:13:20 Ready to write response ...
	2023/12/08 18:13:27 Ready to marshal response ...
	2023/12/08 18:13:27 Ready to write response ...
	2023/12/08 18:13:28 Ready to marshal response ...
	2023/12/08 18:13:28 Ready to write response ...
	2023/12/08 18:13:31 Ready to marshal response ...
	2023/12/08 18:13:31 Ready to write response ...
	2023/12/08 18:13:31 Ready to marshal response ...
	2023/12/08 18:13:31 Ready to write response ...
	2023/12/08 18:13:31 Ready to marshal response ...
	2023/12/08 18:13:31 Ready to write response ...
	2023/12/08 18:15:49 Ready to marshal response ...
	2023/12/08 18:15:49 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:16:00 up  1:57,  0 users,  load average: 0.25, 0.84, 0.54
	Linux addons-766826 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] <==
	* I1208 18:13:57.630401       1 main.go:227] handling current node
	I1208 18:14:07.642216       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:14:07.642238       1 main.go:227] handling current node
	I1208 18:14:17.654427       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:14:17.654465       1 main.go:227] handling current node
	I1208 18:14:27.658010       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:14:27.658033       1 main.go:227] handling current node
	I1208 18:14:37.669732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:14:37.669762       1 main.go:227] handling current node
	I1208 18:14:47.674437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:14:47.674489       1 main.go:227] handling current node
	I1208 18:14:57.678286       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:14:57.678310       1 main.go:227] handling current node
	I1208 18:15:07.682133       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:15:07.682157       1 main.go:227] handling current node
	I1208 18:15:17.693846       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:15:17.693881       1 main.go:227] handling current node
	I1208 18:15:27.697680       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:15:27.697703       1 main.go:227] handling current node
	I1208 18:15:37.705991       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:15:37.706017       1 main.go:227] handling current node
	I1208 18:15:47.711102       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:15:47.711128       1 main.go:227] handling current node
	I1208 18:15:57.723441       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:15:57.723468       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] <==
	* I1208 18:13:27.386933       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1208 18:13:27.609121       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.117.13"}
	I1208 18:13:31.530087       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.253.193"}
	E1208 18:13:36.541947       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1208 18:13:46.568407       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.568460       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.575030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.575100       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.629311       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.629395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.636142       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.636258       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.642651       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.642768       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.719465       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.719541       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.731219       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.731390       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 18:13:46.734219       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 18:13:46.734319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1208 18:13:47.636414       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1208 18:13:47.734533       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1208 18:13:47.741986       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1208 18:14:21.835526       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1208 18:15:49.645179       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.43.48"}
	
	* 
	* ==> kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] <==
	* W1208 18:14:32.911344       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:14:32.911376       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1208 18:14:43.477244       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:14:43.477275       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1208 18:15:01.149422       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:15:01.149451       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1208 18:15:04.295392       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:15:04.295422       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1208 18:15:16.604443       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:15:16.604481       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1208 18:15:42.655130       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:15:42.655165       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1208 18:15:49.490339       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1208 18:15:49.502250       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-25bdl"
	I1208 18:15:49.510078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.861151ms"
	I1208 18:15:49.515784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.652071ms"
	I1208 18:15:49.515893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.246µs"
	I1208 18:15:49.516774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="62.248µs"
	I1208 18:15:51.569552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.172277ms"
	I1208 18:15:51.569626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.742µs"
	I1208 18:15:52.095074       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1208 18:15:52.095728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="5.075µs"
	I1208 18:15:52.099198       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1208 18:15:55.592626       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1208 18:15:55.592660       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
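	Note on the entries above: the repeating "failed to list *v1.PartialObjectMetadata" errors start right after the apiserver terminated the volumesnapshot watchers at 18:13:47 (see the kube-apiserver log), so the metadata informers appear to be retrying against snapshot CRDs that were removed when that addon was disabled. A hedged way to confirm which snapshot CRDs remain:
	
	  kubectl --context addons-766826 get crd -o name | grep snapshot.storage.k8s.io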
	* 
	* ==> kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] <==
	* I1208 18:11:27.025246       1 server_others.go:69] "Using iptables proxy"
	I1208 18:11:27.245485       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1208 18:11:27.820291       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 18:11:27.823446       1 server_others.go:152] "Using iptables Proxier"
	I1208 18:11:27.823498       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1208 18:11:27.823509       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1208 18:11:27.823546       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 18:11:27.823763       1 server.go:846] "Version info" version="v1.28.4"
	I1208 18:11:27.823781       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 18:11:27.827532       1 config.go:188] "Starting service config controller"
	I1208 18:11:27.827707       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 18:11:27.827787       1 config.go:97] "Starting endpoint slice config controller"
	I1208 18:11:27.827822       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 18:11:27.828455       1 config.go:315] "Starting node config controller"
	I1208 18:11:27.832108       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 18:11:27.928562       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1208 18:11:27.928710       1 shared_informer.go:318] Caches are synced for service config
	I1208 18:11:27.933506       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] <==
	* E1208 18:11:07.439148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1208 18:11:07.438816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 18:11:07.439258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 18:11:07.438210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1208 18:11:07.439041       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1208 18:11:07.439287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1208 18:11:07.438571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1208 18:11:07.439315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1208 18:11:07.439379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1208 18:11:07.439397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1208 18:11:07.439474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:11:07.439485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:11:08.316137       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1208 18:11:08.316187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1208 18:11:08.339858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1208 18:11:08.339895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1208 18:11:08.406031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:11:08.406060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:11:08.433479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1208 18:11:08.433505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1208 18:11:08.477999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1208 18:11:08.478034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1208 18:11:08.604878       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1208 18:11:08.604925       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1208 18:11:10.934432       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
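	Note on the entries above: the "forbidden" list/watch failures are all from 18:11:07-18:11:08, before the apiserver finished bootstrapping RBAC; the final line shows the scheduler's caches syncing shortly after, so this is startup noise rather than a persistent permission problem. A hedged after-the-fact check of one such permission:
	
	  kubectl --context addons-766826 auth can-i list pods --as=system:kube-scheduler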
	* 
	* ==> kubelet <==
	* Dec 08 18:15:49 addons-766826 kubelet[1561]: I1208 18:15:49.641784    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/912352eb-a38c-4f54-ac07-c11a4dddb1c9-gcp-creds\") pod \"hello-world-app-5d77478584-25bdl\" (UID: \"912352eb-a38c-4f54-ac07-c11a4dddb1c9\") " pod="default/hello-world-app-5d77478584-25bdl"
	Dec 08 18:15:49 addons-766826 kubelet[1561]: I1208 18:15:49.641868    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kdw\" (UniqueName: \"kubernetes.io/projected/912352eb-a38c-4f54-ac07-c11a4dddb1c9-kube-api-access-j8kdw\") pod \"hello-world-app-5d77478584-25bdl\" (UID: \"912352eb-a38c-4f54-ac07-c11a4dddb1c9\") " pod="default/hello-world-app-5d77478584-25bdl"
	Dec 08 18:15:49 addons-766826 kubelet[1561]: W1208 18:15:49.919487    1561 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/crio-c67964e1bc3d618dd2058ef4a0cf2df4e8e4ac36758d29857d39f6ef42e2e8ca WatchSource:0}: Error finding container c67964e1bc3d618dd2058ef4a0cf2df4e8e4ac36758d29857d39f6ef42e2e8ca: Status 404 returned error can't find the container with id c67964e1bc3d618dd2058ef4a0cf2df4e8e4ac36758d29857d39f6ef42e2e8ca
	Dec 08 18:15:50 addons-766826 kubelet[1561]: I1208 18:15:50.750272    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79tw8\" (UniqueName: \"kubernetes.io/projected/4dbe76f2-999f-4e8a-beac-8c4693152b8f-kube-api-access-79tw8\") pod \"4dbe76f2-999f-4e8a-beac-8c4693152b8f\" (UID: \"4dbe76f2-999f-4e8a-beac-8c4693152b8f\") "
	Dec 08 18:15:50 addons-766826 kubelet[1561]: I1208 18:15:50.752179    1561 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbe76f2-999f-4e8a-beac-8c4693152b8f-kube-api-access-79tw8" (OuterVolumeSpecName: "kube-api-access-79tw8") pod "4dbe76f2-999f-4e8a-beac-8c4693152b8f" (UID: "4dbe76f2-999f-4e8a-beac-8c4693152b8f"). InnerVolumeSpecName "kube-api-access-79tw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 08 18:15:50 addons-766826 kubelet[1561]: I1208 18:15:50.851606    1561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-79tw8\" (UniqueName: \"kubernetes.io/projected/4dbe76f2-999f-4e8a-beac-8c4693152b8f-kube-api-access-79tw8\") on node \"addons-766826\" DevicePath \"\""
	Dec 08 18:15:51 addons-766826 kubelet[1561]: I1208 18:15:51.534889    1561 scope.go:117] "RemoveContainer" containerID="15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c"
	Dec 08 18:15:51 addons-766826 kubelet[1561]: I1208 18:15:51.551918    1561 scope.go:117] "RemoveContainer" containerID="15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c"
	Dec 08 18:15:51 addons-766826 kubelet[1561]: E1208 18:15:51.552400    1561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c\": container with ID starting with 15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c not found: ID does not exist" containerID="15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c"
	Dec 08 18:15:51 addons-766826 kubelet[1561]: I1208 18:15:51.552457    1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c"} err="failed to get container status \"15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c\": rpc error: code = NotFound desc = could not find container \"15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c\": container with ID starting with 15cb33eff7a9427eb2907b660862fef06e22c2d471b824a2b8ac9218421e3b2c not found: ID does not exist"
	Dec 08 18:15:51 addons-766826 kubelet[1561]: I1208 18:15:51.563805    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-25bdl" podStartSLOduration=1.9183923950000001 podCreationTimestamp="2023-12-08 18:15:49 +0000 UTC" firstStartedPulling="2023-12-08 18:15:49.923408262 +0000 UTC m=+279.825405861" lastFinishedPulling="2023-12-08 18:15:50.568764527 +0000 UTC m=+280.470762120" observedRunningTime="2023-12-08 18:15:51.563249585 +0000 UTC m=+281.465247189" watchObservedRunningTime="2023-12-08 18:15:51.563748654 +0000 UTC m=+281.465746252"
	Dec 08 18:15:52 addons-766826 kubelet[1561]: I1208 18:15:52.238201    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="00b5b8d7-9fee-4482-a0b1-437da9c712a3" path="/var/lib/kubelet/pods/00b5b8d7-9fee-4482-a0b1-437da9c712a3/volumes"
	Dec 08 18:15:52 addons-766826 kubelet[1561]: I1208 18:15:52.238578    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4dbe76f2-999f-4e8a-beac-8c4693152b8f" path="/var/lib/kubelet/pods/4dbe76f2-999f-4e8a-beac-8c4693152b8f/volumes"
	Dec 08 18:15:52 addons-766826 kubelet[1561]: I1208 18:15:52.238869    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fa166cee-a2fd-4564-af65-daaf708c301b" path="/var/lib/kubelet/pods/fa166cee-a2fd-4564-af65-daaf708c301b/volumes"
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.478725    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86132242-3460-406f-9276-ad5d62038cd2-webhook-cert\") pod \"86132242-3460-406f-9276-ad5d62038cd2\" (UID: \"86132242-3460-406f-9276-ad5d62038cd2\") "
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.478782    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8j76\" (UniqueName: \"kubernetes.io/projected/86132242-3460-406f-9276-ad5d62038cd2-kube-api-access-r8j76\") pod \"86132242-3460-406f-9276-ad5d62038cd2\" (UID: \"86132242-3460-406f-9276-ad5d62038cd2\") "
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.480649    1561 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86132242-3460-406f-9276-ad5d62038cd2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "86132242-3460-406f-9276-ad5d62038cd2" (UID: "86132242-3460-406f-9276-ad5d62038cd2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.480730    1561 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86132242-3460-406f-9276-ad5d62038cd2-kube-api-access-r8j76" (OuterVolumeSpecName: "kube-api-access-r8j76") pod "86132242-3460-406f-9276-ad5d62038cd2" (UID: "86132242-3460-406f-9276-ad5d62038cd2"). InnerVolumeSpecName "kube-api-access-r8j76". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.549765    1561 scope.go:117] "RemoveContainer" containerID="0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155"
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.567834    1561 scope.go:117] "RemoveContainer" containerID="0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155"
	Dec 08 18:15:55 addons-766826 kubelet[1561]: E1208 18:15:55.568258    1561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155\": container with ID starting with 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155 not found: ID does not exist" containerID="0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155"
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.568312    1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155"} err="failed to get container status \"0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155\": rpc error: code = NotFound desc = could not find container \"0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155\": container with ID starting with 0c195e40e848bae4396e1e04d7aba230dae1c5d3da1501ef39a74b10fa9a1155 not found: ID does not exist"
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.579505    1561 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86132242-3460-406f-9276-ad5d62038cd2-webhook-cert\") on node \"addons-766826\" DevicePath \"\""
	Dec 08 18:15:55 addons-766826 kubelet[1561]: I1208 18:15:55.579538    1561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r8j76\" (UniqueName: \"kubernetes.io/projected/86132242-3460-406f-9276-ad5d62038cd2-kube-api-access-r8j76\") on node \"addons-766826\" DevicePath \"\""
	Dec 08 18:15:56 addons-766826 kubelet[1561]: I1208 18:15:56.238245    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="86132242-3460-406f-9276-ad5d62038cd2" path="/var/lib/kubelet/pods/86132242-3460-406f-9276-ad5d62038cd2/volumes"
	
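	Note on the entries above: the NotFound errors from RemoveContainer are the kubelet retrying deletion of containers CRI-O had already removed, which is benign during addon teardown. A hedged way to confirm a container is really gone on the node, using the truncated ID from the log:
	
	  minikube -p addons-766826 ssh -- sudo crictl ps -a --id 15cb33eff7a9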
	* 
	* ==> storage-provisioner [0afc54229499cded21f7a7ff6d8237ce979555642822ab486ebb66c4fa43311a] <==
	* I1208 18:11:58.620374       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 18:11:58.630389       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 18:11:58.630475       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1208 18:11:58.636266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 18:11:58.636366       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a667e5e3-74ad-4bb9-9c4d-78582618c974", APIVersion:"v1", ResourceVersion:"882", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-766826_1d54b840-b3b3-4bdf-bbce-c1d4e718f206 became leader
	I1208 18:11:58.636411       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-766826_1d54b840-b3b3-4bdf-bbce-c1d4e718f206!
	I1208 18:11:58.737200       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-766826_1d54b840-b3b3-4bdf-bbce-c1d4e718f206!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-766826 -n addons-766826
helpers_test.go:261: (dbg) Run:  kubectl --context addons-766826 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.52s)

TestAddons/parallel/InspektorGadget (7.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-p6mlj" [a5d3cab4-e1fb-498c-8fe2-8e820e09c7ec] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011072198s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-766826
addons_test.go:840: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-766826: exit status 11 (551.80448ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-08T18:13:28Z" level=error msg="stat /run/runc/8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:841: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-766826" : exit status 11
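Note: MK_ADDON_DISABLE_PAUSED means minikube's pre-flight "anything paused?" check failed before the addon was disabled; the check shelled out to "sudo runc list -f json" inside the node and runc hit a stale container state directory under /run/runc. A minimal reproduction sketch, assuming the addons-766826 node container is still running (the hash-named state path varies per run and is left as-is):

	minikube -p addons-766826 ssh -- sudo runc list -f json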
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-766826
helpers_test.go:235: (dbg) docker inspect addons-766826:

-- stdout --
	[
	    {
	        "Id": "543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788",
	        "Created": "2023-12-08T18:10:56.666593963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 345368,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-08T18:10:56.948123685Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7e83e141d5f1084600bb5c7d15c9e2fd69083458051c2cf9d21dfd6243a0ff9b",
	        "ResolvConfPath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/hostname",
	        "HostsPath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/hosts",
	        "LogPath": "/var/lib/docker/containers/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788-json.log",
	        "Name": "/addons-766826",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-766826:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-766826",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b-init/diff:/var/lib/docker/overlay2/f01fd4b86350391aeb4ddce306a73284c32c8168179c226f9bf8857f27cbe54b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b8c11fb1167c050add77cc46fdd254754faae617633474cfefb9e9c55fe786b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-766826",
	                "Source": "/var/lib/docker/volumes/addons-766826/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-766826",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-766826",
	                "name.minikube.sigs.k8s.io": "addons-766826",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "822a4e1dc2929e050de2cb01d72854eda554c5cebb70a24475a0143ca1d46572",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/822a4e1dc292",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-766826": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "543daae92b3e",
	                        "addons-766826"
	                    ],
	                    "NetworkID": "e81a6b26a78ebb03e2e0e03e51afee0a8a4d0b13ed68dae384bb8b39b45b41b6",
	                    "EndpointID": "8c5edae405e0e69ef25f05f02022bf3ddbd04dc0eedd2f0098b9037dc7d3e67a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
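Note: the inspect output above shows every node port published on 127.0.0.1 with an ephemeral host port (SSH 22/tcp on 33074, the API server's 8443/tcp on 33071). A hedged one-liner to recover such a forwarded port from the same data, using the Go template minikube itself runs later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-766826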
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-766826 -n addons-766826
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-766826 logs -n 25: (1.281532457s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | -p download-only-892064                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | -p download-only-892064                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | -p download-only-892064                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| delete  | -p download-only-892064                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| delete  | -p download-only-892064                                                                     | download-only-892064   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| start   | --download-only -p                                                                          | download-docker-819225 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | download-docker-819225                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-819225                                                                   | download-docker-819225 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-908328   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | binary-mirror-908328                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44187                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-908328                                                                     | binary-mirror-908328   | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:10 UTC |
	| addons  | enable dashboard -p                                                                         | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |                     |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-766826 --wait=true                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC | 08 Dec 23 18:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-766826 ssh cat                                                                       | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | /opt/local-path-provisioner/pvc-de77890f-3fa6-42c6-805e-20b83a22f899_default_test-pvc/file1 |                        |         |         |                     |                     |
	| ip      | addons-766826 ip                                                                            | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-766826 addons disable                                                                | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-766826 addons                                                                        | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC | 08 Dec 23 18:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-766826          | jenkins | v1.32.0 | 08 Dec 23 18:13 UTC |                     |
	|         | addons-766826                                                                               |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:10:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:10:35.398019  344702 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:10:35.398146  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:35.398153  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:10:35.398158  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:35.398328  344702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:10:35.398918  344702 out.go:303] Setting JSON to false
	I1208 18:10:35.399763  344702 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6735,"bootTime":1702052300,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:10:35.399822  344702 start.go:138] virtualization: kvm guest
	I1208 18:10:35.401933  344702 out.go:177] * [addons-766826] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:10:35.403254  344702 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:10:35.403316  344702 notify.go:220] Checking for updates...
	I1208 18:10:35.404495  344702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:10:35.405732  344702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:10:35.406964  344702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:10:35.408228  344702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:10:35.409433  344702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:10:35.410772  344702 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:10:35.430876  344702 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:10:35.431013  344702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:35.479450  344702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:35.471411869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:35.479541  344702 docker.go:295] overlay module found
	I1208 18:10:35.481483  344702 out.go:177] * Using the docker driver based on user configuration
	I1208 18:10:35.482846  344702 start.go:298] selected driver: docker
	I1208 18:10:35.482866  344702 start.go:902] validating driver "docker" against <nil>
	I1208 18:10:35.482876  344702 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:10:35.483681  344702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:35.531533  344702 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:35.523779373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:35.531734  344702 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 18:10:35.531937  344702 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 18:10:35.533803  344702 out.go:177] * Using Docker driver with root privileges
	I1208 18:10:35.535160  344702 cni.go:84] Creating CNI manager for ""
	I1208 18:10:35.535182  344702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:35.535195  344702 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 18:10:35.535218  344702 start_flags.go:323] config:
	{Name:addons-766826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:35.536634  344702 out.go:177] * Starting control plane node addons-766826 in cluster addons-766826
	I1208 18:10:35.537797  344702 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:10:35.539036  344702 out.go:177] * Pulling base image ...
	I1208 18:10:35.540348  344702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:35.540405  344702 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:35.540416  344702 cache.go:56] Caching tarball of preloaded images
	I1208 18:10:35.540438  344702 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:10:35.540495  344702 preload.go:174] Found /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 18:10:35.540505  344702 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1208 18:10:35.540924  344702 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/config.json ...
	I1208 18:10:35.540950  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/config.json: {Name:mk1b44e8663c9d9f9ecd1a043dd0e150fd90a0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:10:35.554684  344702 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 to local cache
	I1208 18:10:35.554808  344702 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory
	I1208 18:10:35.554823  344702 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory, skipping pull
	I1208 18:10:35.554828  344702 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in cache, skipping pull
	I1208 18:10:35.554838  344702 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 as a tarball
	I1208 18:10:35.554843  344702 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 from local cache
	I1208 18:10:48.227022  344702 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 from cached tarball
	I1208 18:10:48.227072  344702 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:10:48.227171  344702 start.go:365] acquiring machines lock for addons-766826: {Name:mkd33173a289aa7ad362ea3ee90ba26cfce28fce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:10:48.227293  344702 start.go:369] acquired machines lock for "addons-766826" in 94.671µs
	I1208 18:10:48.227322  344702 start.go:93] Provisioning new machine with config: &{Name:addons-766826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:10:48.227417  344702 start.go:125] createHost starting for "" (driver="docker")
	I1208 18:10:48.298822  344702 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1208 18:10:48.299136  344702 start.go:159] libmachine.API.Create for "addons-766826" (driver="docker")
	I1208 18:10:48.299170  344702 client.go:168] LocalClient.Create starting
	I1208 18:10:48.299324  344702 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem
	I1208 18:10:48.601721  344702 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem
	I1208 18:10:48.729267  344702 cli_runner.go:164] Run: docker network inspect addons-766826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 18:10:48.745586  344702 cli_runner.go:211] docker network inspect addons-766826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 18:10:48.745662  344702 network_create.go:281] running [docker network inspect addons-766826] to gather additional debugging logs...
	I1208 18:10:48.745683  344702 cli_runner.go:164] Run: docker network inspect addons-766826
	W1208 18:10:48.762060  344702 cli_runner.go:211] docker network inspect addons-766826 returned with exit code 1
	I1208 18:10:48.762092  344702 network_create.go:284] error running [docker network inspect addons-766826]: docker network inspect addons-766826: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-766826 not found
	I1208 18:10:48.762112  344702 network_create.go:286] output of [docker network inspect addons-766826]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-766826 not found
	
	** /stderr **
	I1208 18:10:48.762230  344702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:10:48.778846  344702 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002d4c200}
	I1208 18:10:48.778903  344702 network_create.go:124] attempt to create docker network addons-766826 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 18:10:48.778973  344702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-766826 addons-766826
	I1208 18:10:49.044212  344702 network_create.go:108] docker network addons-766826 192.168.49.0/24 created
	I1208 18:10:49.044255  344702 kic.go:121] calculated static IP "192.168.49.2" for the "addons-766826" container
	I1208 18:10:49.044330  344702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 18:10:49.058979  344702 cli_runner.go:164] Run: docker volume create addons-766826 --label name.minikube.sigs.k8s.io=addons-766826 --label created_by.minikube.sigs.k8s.io=true
	I1208 18:10:49.163699  344702 oci.go:103] Successfully created a docker volume addons-766826
	I1208 18:10:49.163837  344702 cli_runner.go:164] Run: docker run --rm --name addons-766826-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-766826 --entrypoint /usr/bin/test -v addons-766826:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib
	I1208 18:10:51.365190  344702 cli_runner.go:217] Completed: docker run --rm --name addons-766826-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-766826 --entrypoint /usr/bin/test -v addons-766826:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib: (2.201295404s)
	I1208 18:10:51.365224  344702 oci.go:107] Successfully prepared a docker volume addons-766826
	I1208 18:10:51.365264  344702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:51.365291  344702 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 18:10:51.365349  344702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-766826:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 18:10:56.604017  344702 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-766826:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.238609551s)
	I1208 18:10:56.604048  344702 kic.go:203] duration metric: took 5.238755 seconds to extract preloaded images to volume
	W1208 18:10:56.604182  344702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 18:10:56.604276  344702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 18:10:56.652550  344702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-766826 --name addons-766826 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-766826 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-766826 --network addons-766826 --ip 192.168.49.2 --volume addons-766826:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:10:56.956167  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Running}}
	I1208 18:10:56.972820  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:10:56.990438  344702 cli_runner.go:164] Run: docker exec addons-766826 stat /var/lib/dpkg/alternatives/iptables
	I1208 18:10:57.050090  344702 oci.go:144] the created container "addons-766826" has a running status.
	I1208 18:10:57.050131  344702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa...
	I1208 18:10:57.373393  344702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 18:10:57.392698  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:10:57.408611  344702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 18:10:57.408636  344702 kic_runner.go:114] Args: [docker exec --privileged addons-766826 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 18:10:57.496117  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:10:57.513251  344702 machine.go:88] provisioning docker machine ...
	I1208 18:10:57.513328  344702 ubuntu.go:169] provisioning hostname "addons-766826"
	I1208 18:10:57.513449  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:57.534604  344702 main.go:141] libmachine: Using SSH client type: native
	I1208 18:10:57.535208  344702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1208 18:10:57.535238  344702 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-766826 && echo "addons-766826" | sudo tee /etc/hostname
	I1208 18:10:57.669551  344702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-766826
	
	I1208 18:10:57.669639  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:57.688461  344702 main.go:141] libmachine: Using SSH client type: native
	I1208 18:10:57.688818  344702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1208 18:10:57.688869  344702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-766826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-766826/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-766826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 18:10:57.810491  344702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:10:57.810520  344702 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:10:57.810557  344702 ubuntu.go:177] setting up certificates
	I1208 18:10:57.810573  344702 provision.go:83] configureAuth start
	I1208 18:10:57.810630  344702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-766826
	I1208 18:10:57.827338  344702 provision.go:138] copyHostCerts
	I1208 18:10:57.827414  344702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:10:57.827534  344702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:10:57.827607  344702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:10:57.827664  344702 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.addons-766826 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-766826]
	I1208 18:10:58.037009  344702 provision.go:172] copyRemoteCerts
	I1208 18:10:58.037084  344702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:10:58.037151  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.053414  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
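	The ssh client above maps onto a plain OpenSSH invocation; the key path, port and username all come from that log line, so the manual equivalent (host-key prompts aside) is:
	    ssh -i /home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa \
	        -p 33074 docker@127.0.0.1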
	I1208 18:10:58.142608  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:10:58.163569  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1208 18:10:58.184504  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 18:10:58.205867  344702 provision.go:86] duration metric: configureAuth took 395.27283ms
	I1208 18:10:58.205901  344702 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:10:58.206085  344702 config.go:182] Loaded profile config "addons-766826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:10:58.206207  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.222680  344702 main.go:141] libmachine: Using SSH client type: native
	I1208 18:10:58.223008  344702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33074 <nil> <nil>}
	I1208 18:10:58.223024  344702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:10:58.431598  344702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:10:58.431631  344702 machine.go:91] provisioned docker machine in 918.353023ms
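	A hedged check, once the node is up, that the CRIO_MINIKUBE_OPTIONS file written above reached the container and that the crio unit consumes it (this assumes the kicbase image's crio unit sources the sysconfig file):
	    out/minikube-linux-amd64 -p addons-766826 ssh "cat /etc/sysconfig/crio.minikube"
	    out/minikube-linux-amd64 -p addons-766826 ssh "systemctl cat crio"   # shows the unit and any EnvironmentFile lines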
	I1208 18:10:58.431643  344702 client.go:171] LocalClient.Create took 10.132465458s
	I1208 18:10:58.431666  344702 start.go:167] duration metric: libmachine.API.Create for "addons-766826" took 10.132532785s
	I1208 18:10:58.431709  344702 start.go:300] post-start starting for "addons-766826" (driver="docker")
	I1208 18:10:58.431725  344702 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:10:58.431808  344702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:10:58.431862  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.448069  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.539521  344702 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:10:58.542607  344702 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:10:58.542652  344702 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:10:58.542672  344702 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:10:58.542686  344702 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1208 18:10:58.542703  344702 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:10:58.542782  344702 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:10:58.542816  344702 start.go:303] post-start completed in 111.096153ms
	I1208 18:10:58.543275  344702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-766826
	I1208 18:10:58.560849  344702 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/config.json ...
	I1208 18:10:58.561143  344702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:10:58.561192  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.577486  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.663444  344702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:10:58.667582  344702 start.go:128] duration metric: createHost completed in 10.44014647s
	I1208 18:10:58.667610  344702 start.go:83] releasing machines lock for "addons-766826", held for 10.440304486s
	I1208 18:10:58.667684  344702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-766826
	I1208 18:10:58.683757  344702 ssh_runner.go:195] Run: cat /version.json
	I1208 18:10:58.683808  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.683849  344702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:10:58.683916  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:10:58.699728  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.701078  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:10:58.785963  344702 ssh_runner.go:195] Run: systemctl --version
	I1208 18:10:58.790076  344702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:10:58.927543  344702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:10:58.931811  344702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:10:58.949255  344702 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:10:58.949329  344702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:10:58.975521  344702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1208 18:10:58.975549  344702 start.go:475] detecting cgroup driver to use...
	I1208 18:10:58.975580  344702 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:10:58.975617  344702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:10:58.989892  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:10:58.999944  344702 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:10:58.999993  344702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:10:59.012312  344702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:10:59.024673  344702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 18:10:59.105246  344702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:10:59.179052  344702 docker.go:219] disabling docker service ...
	I1208 18:10:59.179107  344702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:10:59.196586  344702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:10:59.206693  344702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:10:59.279463  344702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:10:59.355864  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 18:10:59.365812  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:10:59.379389  344702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1208 18:10:59.379439  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.387854  344702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 18:10:59.387924  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.396805  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.405024  344702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:10:59.413445  344702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 18:10:59.421205  344702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 18:10:59.428490  344702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 18:10:59.435628  344702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 18:10:59.507579  344702 ssh_runner.go:195] Run: sudo systemctl restart crio
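	The sed edits above amount to a small, repeatable CRI-O reconfiguration; a consolidated sketch of the same steps (file path, keys and values all taken from the log lines above):
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale setting
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # re-add it after cgroup_manager
	    sudo systemctl daemon-reload && sudo systemctl restart crio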
	I1208 18:10:59.596865  344702 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 18:10:59.596963  344702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 18:10:59.600306  344702 start.go:543] Will wait 60s for crictl version
	I1208 18:10:59.600353  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:10:59.603421  344702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 18:10:59.635598  344702 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1208 18:10:59.635700  344702 ssh_runner.go:195] Run: crio --version
	I1208 18:10:59.668608  344702 ssh_runner.go:195] Run: crio --version
	I1208 18:10:59.704008  344702 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1208 18:10:59.705641  344702 cli_runner.go:164] Run: docker network inspect addons-766826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
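	The inspect call above pulls the whole network object through one Go template; individual fields can be queried the same way, for example just the subnet and gateway (template fields as used in the log's own format string):
	    docker network inspect addons-766826 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'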
	I1208 18:10:59.721774  344702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 18:10:59.725227  344702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 18:10:59.735439  344702 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:59.735498  344702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:10:59.791378  344702 crio.go:496] all images are preloaded for cri-o runtime.
	I1208 18:10:59.791402  344702 crio.go:415] Images already preloaded, skipping extraction
	I1208 18:10:59.791449  344702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:10:59.822941  344702 crio.go:496] all images are preloaded for cri-o runtime.
	I1208 18:10:59.822965  344702 cache_images.go:84] Images are preloaded, skipping loading
	I1208 18:10:59.823026  344702 ssh_runner.go:195] Run: crio config
	I1208 18:10:59.863332  344702 cni.go:84] Creating CNI manager for ""
	I1208 18:10:59.863354  344702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:59.863380  344702 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1208 18:10:59.863401  344702 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-766826 NodeName:addons-766826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 18:10:59.863518  344702 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-766826"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
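	The generated config above is written to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps further down) before being handed to kubeadm init. A hedged way to exercise such a file without touching the host is kubeadm's standard dry-run mode:
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run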
	
	I1208 18:10:59.863574  344702 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-766826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1208 18:10:59.863621  344702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1208 18:10:59.871681  344702 binaries.go:44] Found k8s binaries, skipping transfer
	I1208 18:10:59.871743  344702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 18:10:59.879250  344702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1208 18:10:59.894835  344702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 18:10:59.910586  344702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
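	After the kubelet unit and its 10-kubeadm.conf drop-in are copied over (the two scp lines above), systemd should report the merged view; a quick check on the node:
	    systemctl cat kubelet   # should print kubelet.service plus the 10-kubeadm.conf drop-in written above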
	I1208 18:10:59.926440  344702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 18:10:59.929507  344702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 18:10:59.938941  344702 certs.go:56] Setting up /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826 for IP: 192.168.49.2
	I1208 18:10:59.938985  344702 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5abf3d3db90d2494e2d623a52fec5b2843f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:10:59.939117  344702 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key
	I1208 18:11:00.347543  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt ...
	I1208 18:11:00.347573  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt: {Name:mkebb9c5ec660f8fb0fbef25138a9307f3148dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.347743  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key ...
	I1208 18:11:00.347753  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key: {Name:mk73d5996c1cb7cf921d1e1a76c3fe7bb86b939e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.347818  344702 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key
	I1208 18:11:00.665798  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt ...
	I1208 18:11:00.665834  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt: {Name:mk0b4c28708e258b8bcb9b9d5175dc48cfb0f674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.666004  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key ...
	I1208 18:11:00.666015  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key: {Name:mk3cb7c4892d2ce7791c43b3da5dddfa48505634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.666115  344702 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.key
	I1208 18:11:00.666128  344702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt with IP's: []
	I1208 18:11:00.990301  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt ...
	I1208 18:11:00.990339  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: {Name:mkacdf54e0bb0d02b559b4a566313eb2d9b0bf5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.990555  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.key ...
	I1208 18:11:00.990573  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.key: {Name:mk56890f5b5a4234858be8e78aeac0be5f06b4f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:00.990653  344702 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2
	I1208 18:11:00.990668  344702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1208 18:11:01.119977  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2 ...
	I1208 18:11:01.120008  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2: {Name:mk0176b352b16a5010d95b2c8e2593ced4cb0475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.120161  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2 ...
	I1208 18:11:01.120179  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2: {Name:mkbf46e45710c65630c3d9932836e6cd5d5904d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.120245  344702 certs.go:337] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt
	I1208 18:11:01.120308  344702 certs.go:341] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key
	I1208 18:11:01.120349  344702 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key
	I1208 18:11:01.120365  344702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt with IP's: []
	I1208 18:11:01.318728  344702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt ...
	I1208 18:11:01.318770  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt: {Name:mke99fb8b56ae3f85a7ddbddf047a306784da1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.318979  344702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key ...
	I1208 18:11:01.318998  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key: {Name:mk6a9744bbcfaa7ae2890dd4bb3528ea3cafdae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:01.319214  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 18:11:01.319260  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem (1082 bytes)
	I1208 18:11:01.319290  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem (1123 bytes)
	I1208 18:11:01.319317  344702 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem (1679 bytes)
	I1208 18:11:01.320069  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1208 18:11:01.342475  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 18:11:01.363813  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 18:11:01.384419  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 18:11:01.405349  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 18:11:01.426173  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 18:11:01.446820  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 18:11:01.467014  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 18:11:01.487776  344702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 18:11:01.508254  344702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 18:11:01.523286  344702 ssh_runner.go:195] Run: openssl version
	I1208 18:11:01.528147  344702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1208 18:11:01.536267  344702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:11:01.539262  344702 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:11:01.539309  344702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:11:01.545254  344702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
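	The b5213941.0 link name above is the CA certificate's subject hash, which the preceding openssl run computes; the two steps can be reproduced by hand (paths and hash value from the log):
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 for this CA
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"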
	I1208 18:11:01.552995  344702 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1208 18:11:01.556162  344702 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:11:01.556249  344702 kubeadm.go:404] StartCluster: {Name:addons-766826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-766826 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:11:01.556338  344702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 18:11:01.556383  344702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 18:11:01.589317  344702 cri.go:89] found id: ""
	I1208 18:11:01.589385  344702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 18:11:01.597892  344702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 18:11:01.606145  344702 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1208 18:11:01.606213  344702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 18:11:01.614138  344702 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 18:11:01.614225  344702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 18:11:01.659847  344702 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1208 18:11:01.659925  344702 kubeadm.go:322] [preflight] Running pre-flight checks
	I1208 18:11:01.697087  344702 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1208 18:11:01.697179  344702 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1208 18:11:01.697231  344702 kubeadm.go:322] OS: Linux
	I1208 18:11:01.697295  344702 kubeadm.go:322] CGROUPS_CPU: enabled
	I1208 18:11:01.697366  344702 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1208 18:11:01.697457  344702 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1208 18:11:01.697512  344702 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1208 18:11:01.697563  344702 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1208 18:11:01.697613  344702 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1208 18:11:01.697688  344702 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1208 18:11:01.697763  344702 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1208 18:11:01.697855  344702 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1208 18:11:01.760710  344702 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 18:11:01.760873  344702 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 18:11:01.760981  344702 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1208 18:11:01.952375  344702 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 18:11:01.955791  344702 out.go:204]   - Generating certificates and keys ...
	I1208 18:11:01.955952  344702 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1208 18:11:01.956083  344702 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1208 18:11:02.123453  344702 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 18:11:02.236180  344702 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1208 18:11:02.368096  344702 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1208 18:11:02.524258  344702 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1208 18:11:02.705988  344702 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1208 18:11:02.706152  344702 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-766826 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 18:11:02.832331  344702 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1208 18:11:02.832499  344702 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-766826 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 18:11:03.006634  344702 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 18:11:03.056525  344702 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 18:11:03.149222  344702 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1208 18:11:03.149357  344702 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 18:11:03.252028  344702 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 18:11:03.363895  344702 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 18:11:03.577588  344702 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 18:11:03.742631  344702 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 18:11:03.743059  344702 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 18:11:03.746372  344702 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 18:11:03.748700  344702 out.go:204]   - Booting up control plane ...
	I1208 18:11:03.748828  344702 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 18:11:03.748949  344702 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 18:11:03.749032  344702 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 18:11:03.756427  344702 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 18:11:03.757221  344702 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 18:11:03.757286  344702 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1208 18:11:03.833133  344702 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1208 18:11:08.835507  344702 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002365 seconds
	I1208 18:11:08.835671  344702 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 18:11:08.848399  344702 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 18:11:09.373835  344702 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 18:11:09.374132  344702 kubeadm.go:322] [mark-control-plane] Marking the node addons-766826 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 18:11:09.884612  344702 kubeadm.go:322] [bootstrap-token] Using token: xgtwvu.3ufmvdlgrs1fk56u
	I1208 18:11:09.886228  344702 out.go:204]   - Configuring RBAC rules ...
	I1208 18:11:09.886375  344702 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 18:11:09.891194  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 18:11:09.897658  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 18:11:09.900370  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 18:11:09.903046  344702 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 18:11:09.905625  344702 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 18:11:09.915886  344702 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 18:11:10.133515  344702 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1208 18:11:10.325401  344702 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1208 18:11:10.326791  344702 kubeadm.go:322] 
	I1208 18:11:10.326893  344702 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1208 18:11:10.326905  344702 kubeadm.go:322] 
	I1208 18:11:10.327029  344702 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1208 18:11:10.327042  344702 kubeadm.go:322] 
	I1208 18:11:10.327079  344702 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1208 18:11:10.327174  344702 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 18:11:10.327246  344702 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 18:11:10.327257  344702 kubeadm.go:322] 
	I1208 18:11:10.327330  344702 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1208 18:11:10.327370  344702 kubeadm.go:322] 
	I1208 18:11:10.327455  344702 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 18:11:10.327469  344702 kubeadm.go:322] 
	I1208 18:11:10.327549  344702 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1208 18:11:10.327692  344702 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 18:11:10.327789  344702 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 18:11:10.327800  344702 kubeadm.go:322] 
	I1208 18:11:10.327939  344702 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 18:11:10.328054  344702 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1208 18:11:10.328077  344702 kubeadm.go:322] 
	I1208 18:11:10.328203  344702 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xgtwvu.3ufmvdlgrs1fk56u \
	I1208 18:11:10.328341  344702 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 \
	I1208 18:11:10.328370  344702 kubeadm.go:322] 	--control-plane 
	I1208 18:11:10.328377  344702 kubeadm.go:322] 
	I1208 18:11:10.328495  344702 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1208 18:11:10.328512  344702 kubeadm.go:322] 
	I1208 18:11:10.328646  344702 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xgtwvu.3ufmvdlgrs1fk56u \
	I1208 18:11:10.328799  344702 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 
	I1208 18:11:10.330441  344702 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1208 18:11:10.330614  344702 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
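	Once init reports success, the admin.conf named in the kubeadm output above is enough to sanity-check the control plane from the node; a minimal sketch (binary path as used elsewhere in this log):
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system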
	I1208 18:11:10.330656  344702 cni.go:84] Creating CNI manager for ""
	I1208 18:11:10.330667  344702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:11:10.333332  344702 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1208 18:11:10.334794  344702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 18:11:10.339231  344702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1208 18:11:10.339251  344702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1208 18:11:10.357227  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 18:11:11.050501  344702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 18:11:11.050618  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.050618  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b minikube.k8s.io/name=addons-766826 minikube.k8s.io/updated_at=2023_12_08T18_11_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.058024  344702 ops.go:34] apiserver oom_adj: -16
	I1208 18:11:11.144393  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.221268  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:11.790315  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:12.290585  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:12.789888  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:13.289766  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:13.790647  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:14.289940  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:14.789778  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:15.290435  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:15.789697  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:16.290570  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:16.790307  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:17.290483  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:17.790701  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:18.289955  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:18.790534  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:19.290068  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:19.790488  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:20.290678  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:20.790507  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:21.290724  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:21.790683  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:22.289697  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:22.789755  344702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:11:22.856614  344702 kubeadm.go:1088] duration metric: took 11.806048867s to wait for elevateKubeSystemPrivileges.
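	The burst of "kubectl get sa default" calls above is a poll loop waiting for the default ServiceAccount to exist; a stand-alone equivalent of that wait (binary and kubeconfig paths from the log):
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done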
	I1208 18:11:22.856657  344702 kubeadm.go:406] StartCluster complete in 21.300414231s
	I1208 18:11:22.856680  344702 settings.go:142] acquiring lock: {Name:mkb1d8fbfd540ec0ff42a4ec77782a6addbbad21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:22.856780  344702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:11:22.857145  344702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/kubeconfig: {Name:mk170d1df5bab3a276f3bc17a718825dd5b16d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:11:22.857327  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 18:11:22.857461  344702 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
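	The toEnable map above is what the addon manager works from; the same switches are exposed on the CLI, for example:
	    out/minikube-linux-amd64 -p addons-766826 addons list
	    out/minikube-linux-amd64 -p addons-766826 addons enable metrics-server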
	I1208 18:11:22.857550  344702 config.go:182] Loaded profile config "addons-766826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:11:22.857559  344702 addons.go:69] Setting gcp-auth=true in profile "addons-766826"
	I1208 18:11:22.857575  344702 addons.go:69] Setting volumesnapshots=true in profile "addons-766826"
	I1208 18:11:22.857582  344702 addons.go:69] Setting metrics-server=true in profile "addons-766826"
	I1208 18:11:22.857591  344702 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-766826"
	I1208 18:11:22.857597  344702 addons.go:69] Setting cloud-spanner=true in profile "addons-766826"
	I1208 18:11:22.857604  344702 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-766826"
	I1208 18:11:22.857606  344702 addons.go:69] Setting default-storageclass=true in profile "addons-766826"
	I1208 18:11:22.857616  344702 addons.go:231] Setting addon metrics-server=true in "addons-766826"
	I1208 18:11:22.857622  344702 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-766826"
	I1208 18:11:22.857627  344702 addons.go:231] Setting addon cloud-spanner=true in "addons-766826"
	I1208 18:11:22.857643  344702 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-766826"
	I1208 18:11:22.857656  344702 addons.go:69] Setting storage-provisioner=true in profile "addons-766826"
	I1208 18:11:22.857615  344702 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-766826"
	I1208 18:11:22.857673  344702 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-766826"
	I1208 18:11:22.857683  344702 addons.go:231] Setting addon storage-provisioner=true in "addons-766826"
	I1208 18:11:22.857689  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857704  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857724  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857728  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857763  344702 addons.go:69] Setting ingress-dns=true in profile "addons-766826"
	I1208 18:11:22.857777  344702 addons.go:231] Setting addon ingress-dns=true in "addons-766826"
	I1208 18:11:22.857816  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.857669  344702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-766826"
	I1208 18:11:22.857675  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.858101  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858210  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858229  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858229  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858248  344702 addons.go:69] Setting ingress=true in profile "addons-766826"
	I1208 18:11:22.858258  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858263  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858269  344702 addons.go:231] Setting addon ingress=true in "addons-766826"
	I1208 18:11:22.858314  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.858776  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858971  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.859107  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.859266  344702 addons.go:69] Setting registry=true in profile "addons-766826"
	I1208 18:11:22.859284  344702 addons.go:231] Setting addon registry=true in "addons-766826"
	I1208 18:11:22.859320  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.859683  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.857600  344702 mustload.go:65] Loading cluster: addons-766826
	I1208 18:11:22.860351  344702 config.go:182] Loaded profile config "addons-766826": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:11:22.860632  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.861209  344702 addons.go:69] Setting helm-tiller=true in profile "addons-766826"
	I1208 18:11:22.861238  344702 addons.go:231] Setting addon helm-tiller=true in "addons-766826"
	I1208 18:11:22.861279  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.861689  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.858778  344702 addons.go:69] Setting inspektor-gadget=true in profile "addons-766826"
	I1208 18:11:22.863568  344702 addons.go:231] Setting addon inspektor-gadget=true in "addons-766826"
	I1208 18:11:22.863654  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.864259  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.857596  344702 addons.go:231] Setting addon volumesnapshots=true in "addons-766826"
	I1208 18:11:22.864908  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.867403  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.891848  344702 out.go:177]   - Using image docker.io/registry:2.8.3
	I1208 18:11:22.893576  344702 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1208 18:11:22.895451  344702 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1208 18:11:22.895476  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1208 18:11:22.895536  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.900176  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1208 18:11:22.905964  344702 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-766826" context rescaled to 1 replicas
	I1208 18:11:22.907301  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1208 18:11:22.907468  344702 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1208 18:11:22.907505  344702 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:11:22.911780  344702 addons.go:231] Setting addon default-storageclass=true in "addons-766826"
	I1208 18:11:22.911859  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.912354  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.920243  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1208 18:11:22.920268  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1208 18:11:22.912577  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1208 18:11:22.920330  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.915605  344702 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-766826"
	I1208 18:11:22.922128  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1208 18:11:22.922331  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.925857  344702 out.go:177] * Verifying Kubernetes components...
	I1208 18:11:22.930884  344702 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1208 18:11:22.931330  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:22.932299  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:22.933059  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1208 18:11:22.933937  344702 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1208 18:11:22.937390  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:11:22.940223  344702 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 18:11:22.942508  344702 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1208 18:11:22.948187  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1208 18:11:22.948277  344702 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1208 18:11:22.949706  344702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:11:22.950325  344702 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1208 18:11:22.950340  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 18:11:22.951785  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1208 18:11:22.951858  344702 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1208 18:11:22.953570  344702 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 18:11:22.953587  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1208 18:11:22.953648  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.954689  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
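All of the sshutil clients above and below dial 127.0.0.1:33074, the host port Docker mapped to the container's 22/tcp. Piecing together only values already present in this log (the key path, port, and docker user), an equivalent manual session would look roughly like the sketch below; the PORT variable is illustrative:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-766826)
    ssh -o StrictHostKeyChecking=no -p "$PORT" \
        -i /home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa \
        docker@127.0.0.1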
	I1208 18:11:22.955196  344702 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 18:11:22.955484  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1208 18:11:22.955650  344702 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:11:22.955661  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1208 18:11:22.957108  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.961525  344702 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1208 18:11:22.961629  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1208 18:11:22.961708  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1208 18:11:22.961740  344702 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1208 18:11:22.961829  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 18:11:22.961914  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.963681  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1208 18:11:22.963695  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1208 18:11:22.963895  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1208 18:11:22.963907  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1208 18:11:22.963934  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:22.963958  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.963962  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.966934  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1208 18:11:22.965495  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.965569  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.965892  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.968544  344702 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 18:11:22.976603  344702 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1208 18:11:22.975604  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1208 18:11:22.979651  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1208 18:11:22.979840  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.982618  344702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1208 18:11:22.982745  344702 out.go:177]   - Using image docker.io/busybox:stable
	I1208 18:11:22.986355  344702 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 18:11:22.986377  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1208 18:11:22.986472  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.984324  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1208 18:11:22.991675  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1208 18:11:22.991758  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:22.994128  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.001506  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.007329  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.013343  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.016481  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.016540  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.017616  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.024344  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.024567  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 18:11:23.025424  344702 node_ready.go:35] waiting up to 6m0s for node "addons-766826" to be "Ready" ...
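node_ready.go now polls the node object until its Ready condition turns true (the recurring "Ready":"False" lines that follow are that poll). Outside the test harness, the same wait can be expressed directly with kubectl; a sketch using this run's context and timeout:

    kubectl --context addons-766826 wait --for=condition=Ready \
        node/addons-766826 --timeout=6m0s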
	I1208 18:11:23.028317  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.037063  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:23.043863  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	W1208 18:11:23.050647  344702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1208 18:11:23.050693  344702 retry.go:31] will retry after 324.404846ms: ssh: handshake failed: EOF
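The first SSH dial races the container's sshd coming up, so retry.go backs off and tries again; by 18:11:23.31 the connection succeeds and the apply commands start flowing. A minimal shell equivalent of that dial-with-backoff pattern (the delays are illustrative, not the values retry.go computes):

    for delay in 0.3 0.6 1.2 2.4; do
        ssh -p "$PORT" docker@127.0.0.1 true && break
        sleep "$delay"
    done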
	I1208 18:11:23.319564  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:11:23.320203  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1208 18:11:23.320263  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1208 18:11:23.335703  344702 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1208 18:11:23.335732  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1208 18:11:23.429155  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 18:11:23.435135  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 18:11:23.522477  344702 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1208 18:11:23.522506  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1208 18:11:23.523104  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 18:11:23.528751  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1208 18:11:23.528831  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1208 18:11:23.535430  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 18:11:23.620106  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1208 18:11:23.620149  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1208 18:11:23.620453  344702 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1208 18:11:23.620483  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1208 18:11:23.628630  344702 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1208 18:11:23.628655  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1208 18:11:23.629977  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 18:11:23.634436  344702 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1208 18:11:23.634475  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1208 18:11:23.635859  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1208 18:11:23.821138  344702 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 18:11:23.821169  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1208 18:11:23.822911  344702 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1208 18:11:23.822941  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1208 18:11:23.835503  344702 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1208 18:11:23.835539  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1208 18:11:23.923254  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1208 18:11:23.923361  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1208 18:11:23.930552  344702 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1208 18:11:23.930582  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1208 18:11:23.937900  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1208 18:11:24.031523  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 18:11:24.119962  344702 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1208 18:11:24.120040  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1208 18:11:24.120580  344702 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1208 18:11:24.120657  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1208 18:11:24.230034  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1208 18:11:24.230060  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1208 18:11:24.339019  344702 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1208 18:11:24.339055  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1208 18:11:24.526693  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1208 18:11:24.526729  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1208 18:11:24.535045  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1208 18:11:24.820398  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1208 18:11:24.820501  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1208 18:11:24.833412  344702 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1208 18:11:24.833447  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1208 18:11:24.926486  344702 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.901879474s)
	I1208 18:11:24.926641  344702 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
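The sed pipeline that just completed edits the coredns ConfigMap in place: it inserts a hosts plugin block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`) so that host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster. The injected fragment, and a quick way to confirm it landed (Corefile is the standard kubeadm data key):

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

    kubectl --context addons-766826 -n kube-system get configmap coredns \
        -o jsonpath='{.data.Corefile}'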
	I1208 18:11:24.930530  344702 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 18:11:24.930559  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1208 18:11:25.035071  344702 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1208 18:11:25.035106  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1208 18:11:25.129836  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:25.140356  344702 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1208 18:11:25.140389  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1208 18:11:25.322023  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 18:11:25.434509  344702 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1208 18:11:25.434598  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1208 18:11:25.620440  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1208 18:11:25.620480  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1208 18:11:25.928996  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1208 18:11:25.929026  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1208 18:11:26.031827  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1208 18:11:26.437065  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1208 18:11:26.437156  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1208 18:11:26.822492  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1208 18:11:26.822573  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1208 18:11:27.127587  344702 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 18:11:27.127675  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1208 18:11:27.335536  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 18:11:27.633935  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:28.139438  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.819756101s)
	I1208 18:11:28.139560  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.710357431s)
	I1208 18:11:28.430271  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.995096897s)
	I1208 18:11:29.523725  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.000533906s)
	I1208 18:11:29.523771  344702 addons.go:467] Verifying addon ingress=true in "addons-766826"
	I1208 18:11:29.523777  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.893731945s)
	I1208 18:11:29.523842  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.887954205s)
	I1208 18:11:29.523865  344702 addons.go:467] Verifying addon registry=true in "addons-766826"
	I1208 18:11:29.523721  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.988246134s)
	I1208 18:11:29.526820  344702 out.go:177] * Verifying ingress addon...
	I1208 18:11:29.523956  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.586028363s)
	I1208 18:11:29.524021  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.492419581s)
	I1208 18:11:29.524049  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.988921383s)
	I1208 18:11:29.524154  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.202074476s)
	I1208 18:11:29.524214  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.492296865s)
	I1208 18:11:29.528428  344702 out.go:177] * Verifying registry addon...
	I1208 18:11:29.528474  344702 addons.go:467] Verifying addon metrics-server=true in "addons-766826"
	W1208 18:11:29.528505  344702 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 18:11:29.529853  344702 retry.go:31] will retry after 161.001524ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
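This failure is an ordering problem, not a broken manifest: the VolumeSnapshotClass object is submitted in the same kubectl apply as the CRDs that define its kind, and the API server rejects it because discovery has not yet registered snapshot.storage.k8s.io/v1. The addon manager simply retries (the --force re-apply a few lines below succeeds once the CRDs are registered). When applying such bundles by hand, the usual fix is to install the CRDs first and wait for them to be established; a sketch:

    kubectl apply \
        -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
        -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
        -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml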
	I1208 18:11:29.529266  344702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1208 18:11:29.530687  344702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1208 18:11:29.535050  344702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1208 18:11:29.535074  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:29.535985  344702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 18:11:29.536005  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:29.538323  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:29.538839  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
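From here on, kapi.go re-checks each labelled pod on a fixed interval and emits one "waiting for pod ... Pending" line per tick until the pods leave Pending, so the long runs of near-identical lines below are a poll loop, not repeated failures. The standalone equivalent for the ingress controller, using the same label selector the poller logs:

    kubectl --context addons-766826 -n ingress-nginx wait --for=condition=Ready \
        pod -l app.kubernetes.io/name=ingress-nginx --timeout=300s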
	I1208 18:11:29.690984  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 18:11:29.749396  344702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1208 18:11:29.749471  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:29.768835  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:29.938648  344702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1208 18:11:29.957921  344702 addons.go:231] Setting addon gcp-auth=true in "addons-766826"
	I1208 18:11:29.957995  344702 host.go:66] Checking if "addons-766826" exists ...
	I1208 18:11:29.958666  344702 cli_runner.go:164] Run: docker container inspect addons-766826 --format={{.State.Status}}
	I1208 18:11:29.978692  344702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1208 18:11:29.978751  344702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-766826
	I1208 18:11:29.994417  344702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33074 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/addons-766826/id_rsa Username:docker}
	I1208 18:11:30.045584  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:30.046183  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:30.046615  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:30.420250  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.084602629s)
	I1208 18:11:30.420303  344702 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-766826"
	I1208 18:11:30.422066  344702 out.go:177] * Verifying csi-hostpath-driver addon...
	I1208 18:11:30.424881  344702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1208 18:11:30.428628  344702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 18:11:30.428653  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:30.432552  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:30.543342  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:30.543851  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:30.802386  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.111357938s)
	I1208 18:11:30.805455  344702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1208 18:11:30.807196  344702 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1208 18:11:30.808694  344702 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1208 18:11:30.808712  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1208 18:11:30.825210  344702 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1208 18:11:30.825244  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1208 18:11:30.841279  344702 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 18:11:30.841301  344702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1208 18:11:30.857041  344702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 18:11:30.937184  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:31.043681  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:31.044908  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:31.437771  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:31.543044  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:31.544240  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:31.925138  344702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.068047577s)
	I1208 18:11:31.926062  344702 addons.go:467] Verifying addon gcp-auth=true in "addons-766826"
	I1208 18:11:31.928934  344702 out.go:177] * Verifying gcp-auth addon...
	I1208 18:11:31.931333  344702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1208 18:11:31.934129  344702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1208 18:11:31.934153  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:31.938866  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:31.943240  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:32.042767  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:32.043034  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:32.437925  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:32.447324  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:32.620753  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:32.622077  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:32.623276  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:32.937023  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:32.947061  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:33.043370  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:33.045841  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:33.438129  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:33.446764  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:33.542955  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:33.544645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:33.936862  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:33.947015  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:34.043224  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:34.043397  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:34.437577  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:34.447561  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:34.543128  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:34.543386  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:34.936851  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:34.946292  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:35.043327  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:35.043381  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:35.044983  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:35.437473  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:35.447801  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:35.542563  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:35.542802  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:35.937356  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:35.946797  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:36.042394  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:36.042805  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:36.437280  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:36.446750  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:36.543040  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:36.543374  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:36.937217  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:36.946869  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:37.042396  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:37.044297  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:37.437477  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:37.446967  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:37.542624  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:37.542697  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:37.544093  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:37.937228  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:37.946541  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:38.041987  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:38.042956  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:38.437517  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:38.446904  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:38.542378  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:38.542621  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:38.936763  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:38.946264  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:39.042733  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:39.043055  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:39.436344  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:39.446658  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:39.542395  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:39.542994  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:39.937109  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:39.946554  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:40.042167  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:40.042756  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:40.044508  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:40.436659  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:40.446231  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:40.543027  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:40.543322  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:40.936645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:40.947045  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:41.042625  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:41.042900  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:41.437561  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:41.446771  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:41.542323  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:41.542375  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:41.937325  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:41.946645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:42.042231  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:42.043242  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:42.436552  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:42.447137  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:42.542766  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:42.542884  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:42.544381  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:42.936365  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:42.946829  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:43.042115  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:43.042433  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:43.437042  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:43.446272  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:43.542752  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:43.543229  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:43.936546  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:43.947191  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:44.042889  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:44.043609  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:44.436995  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:44.446173  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:44.542655  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:44.543004  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:44.937018  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:44.946301  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:45.042731  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:45.043385  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:45.044605  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:45.436926  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:45.448054  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:45.542363  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:45.542770  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:45.936960  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:45.946348  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:46.043014  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:46.043476  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:46.436612  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:46.446853  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:46.542110  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:46.542272  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:46.937168  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:46.946238  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:47.043019  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:47.043032  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:47.437600  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:47.446851  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:47.542437  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:47.542729  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:47.543965  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:47.936877  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:47.946107  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:48.042364  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:48.042655  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:48.436871  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:48.446474  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:48.542908  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:48.543246  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:48.936410  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:48.947265  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:49.043336  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:49.043552  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:49.437067  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:49.446610  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:49.541881  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:49.542680  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:49.544141  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:49.937219  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:49.946637  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:50.042391  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:50.043104  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:50.436770  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:50.447044  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:50.542633  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:50.542908  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:50.936975  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:50.946235  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:51.042570  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:51.043024  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:51.437297  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:51.446780  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:51.542075  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:51.542358  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:51.937200  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:51.946656  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:52.042170  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:52.042873  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:52.044433  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:52.437507  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:52.446785  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:52.541968  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:52.542222  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:52.936785  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:52.946062  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:53.042491  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:53.042758  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:53.437171  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:53.446414  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:53.544668  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:53.544791  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:53.938403  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:53.947316  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:54.042741  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:54.043234  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:54.044468  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:54.436395  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:54.446791  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:54.542668  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:54.542971  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:54.937328  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:54.946813  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:55.042631  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:55.043227  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:55.436512  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:55.446548  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:55.542837  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:55.542999  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:55.936482  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:55.946790  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:56.042578  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:56.042627  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:56.436585  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:56.446915  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:56.542237  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:56.542584  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:56.544181  344702 node_ready.go:58] node "addons-766826" has status "Ready":"False"
	I1208 18:11:56.937289  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:56.946761  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:57.042104  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:57.042965  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:57.436707  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:57.445943  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:57.542519  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:57.542785  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:57.938832  344702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 18:11:57.938862  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:57.946618  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:58.043176  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:58.043525  344702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 18:11:58.043549  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:58.044608  344702 node_ready.go:49] node "addons-766826" has status "Ready":"True"
	I1208 18:11:58.044630  344702 node_ready.go:38] duration metric: took 35.019178359s waiting for node "addons-766826" to be "Ready" ...
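Note: the node_ready.go entries above show a poll of the node's "Ready" condition every couple of seconds until it flips to "True" (about 35s in this run). Below is a minimal client-go sketch of that style of poll; the helper name WaitNodeReady and the 2-second interval are assumptions for illustration, not minikube's actual implementation.

// nodewait is an illustrative sketch of the node-readiness poll suggested
// by the node_ready.go entries above; it is not minikube's actual code.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the named node until its Ready condition is True,
// the case the log prints as node "..." has status "Ready":"True".
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready within %v", name, timeout)
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
}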
	I1208 18:11:58.044639  344702 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:11:58.053757  344702 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gr7cp" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:58.437828  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:58.448202  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:58.543888  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:58.544561  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:58.940432  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:58.947135  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:59.043026  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:59.043192  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:59.438650  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:59.446693  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:11:59.542992  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:11:59.544441  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:11:59.574822  344702 pod_ready.go:92] pod "coredns-5dd5756b68-gr7cp" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.574853  344702 pod_ready.go:81] duration metric: took 1.521068599s waiting for pod "coredns-5dd5756b68-gr7cp" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.574881  344702 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.579929  344702 pod_ready.go:92] pod "etcd-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.579951  344702 pod_ready.go:81] duration metric: took 5.060841ms waiting for pod "etcd-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.579966  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.621520  344702 pod_ready.go:92] pod "kube-apiserver-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.621545  344702 pod_ready.go:81] duration metric: took 41.570164ms waiting for pod "kube-apiserver-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.621558  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.645545  344702 pod_ready.go:92] pod "kube-controller-manager-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:11:59.645570  344702 pod_ready.go:81] duration metric: took 24.003196ms waiting for pod "kube-controller-manager-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.645585  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sqqhb" in "kube-system" namespace to be "Ready" ...
	I1208 18:11:59.938241  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:11:59.946937  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:00.043522  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:00.043651  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:00.044765  344702 pod_ready.go:92] pod "kube-proxy-sqqhb" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:00.044784  344702 pod_ready.go:81] duration metric: took 399.192062ms waiting for pod "kube-proxy-sqqhb" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.044796  344702 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.439650  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:00.445728  344702 pod_ready.go:92] pod "kube-scheduler-addons-766826" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:00.445752  344702 pod_ready.go:81] duration metric: took 400.948098ms waiting for pod "kube-scheduler-addons-766826" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.445765  344702 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:00.446969  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:00.544249  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:00.544779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:00.939505  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:00.947615  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:01.043358  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:01.044112  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:01.438098  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:01.446773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:01.543322  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:01.544030  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:01.939228  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:01.947685  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:02.043094  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:02.044963  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:02.438965  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:02.447380  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:02.543264  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:02.543756  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:02.752935  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:02.938867  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:02.947590  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:03.043647  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:03.044356  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:03.439928  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:03.447004  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:03.543472  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:03.543593  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:03.939241  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:03.946764  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:04.043438  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:04.043587  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:04.439163  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:04.447638  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:04.544629  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:04.544703  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:04.753562  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:04.939910  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:04.947210  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:05.043587  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:05.043663  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:05.439623  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:05.446894  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:05.542981  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:05.546325  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:05.938593  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:05.947831  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:06.042786  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:06.043814  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:06.438046  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:06.446906  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:06.543082  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:06.543232  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:06.938156  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:06.947229  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:07.042884  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:07.043007  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:07.252014  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:07.438711  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:07.446669  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:07.543112  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:07.543846  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:07.938208  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:07.947367  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:08.044096  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:08.044156  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:08.438151  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:08.447314  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:08.543169  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:08.543182  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:08.937942  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:08.947244  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:09.043666  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:09.043898  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:09.252883  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:09.437366  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:09.446303  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:09.543327  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:09.543635  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:09.939007  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:09.946631  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:10.043897  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:10.044041  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:10.441168  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:10.447071  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:10.542867  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:10.543073  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:10.937675  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:10.946954  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:11.043299  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:11.043349  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:11.438616  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:11.447507  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:11.544194  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:11.544456  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:11.824989  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:11.941355  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:11.952050  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:12.044488  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:12.045530  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:12.439022  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:12.447090  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:12.543389  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:12.543460  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:12.940962  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:12.947691  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:13.043971  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:13.043973  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:13.439129  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:13.447515  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:13.543442  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:13.543898  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:13.937807  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:13.947053  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:14.043402  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:14.043814  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:14.252658  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:14.439179  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:14.446008  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:14.542849  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:14.542959  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:14.938525  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:14.947637  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:15.043804  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:15.044385  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:15.438424  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:15.448030  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:15.543094  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:15.543139  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:15.938888  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:15.946575  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:16.043816  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:16.044111  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:16.438150  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:16.446540  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:16.543701  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:16.543704  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:16.751742  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:16.937434  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:16.946832  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:17.043208  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:17.044182  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:17.437773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:17.446929  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:17.542194  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:17.543270  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:17.938658  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:17.946474  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:18.043776  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:18.043889  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:18.438093  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:18.447302  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:18.543362  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:18.543400  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:18.751980  344702 pod_ready.go:102] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:18.938393  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:18.946914  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:19.045091  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:19.045269  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:19.438600  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:19.447751  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:19.542844  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:19.544813  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:19.939571  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:19.947664  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:20.043293  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:20.044252  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:20.438773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:20.446675  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:20.543808  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:20.543842  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:20.756380  344702 pod_ready.go:92] pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:20.756416  344702 pod_ready.go:81] duration metric: took 20.310639375s waiting for pod "metrics-server-7c66d45ddc-zrxqf" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:20.756432  344702 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:20.938280  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:20.946672  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:21.043464  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:21.043718  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:21.439306  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:21.447627  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:21.543679  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:21.543757  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:21.938889  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:21.946952  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:22.042735  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:22.043531  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:22.438103  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:22.447076  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:22.543092  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:22.543180  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:22.841026  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:22.938501  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:22.946030  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:23.042883  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:23.042952  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:23.437710  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:23.446497  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:23.543152  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:23.543477  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:23.939254  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:23.947430  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:24.043865  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:24.043886  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:24.437796  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:24.447111  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:24.543730  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:24.545067  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 18:12:24.937870  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:24.946888  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:25.043092  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:25.043473  344702 kapi.go:107] duration metric: took 55.51278648s to wait for kubernetes.io/minikube-addons=registry ...
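Note: the kapi.go:96 entries poll every pod matching a label selector (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) until none is still Pending, and kapi.go:107 records the total wait once a selector's pods are up, as the registry line above just did. A minimal sketch of such a selector wait, assuming a client-go clientset; the name WaitForPodsBySelector and the 500ms cadence are illustrative, not minikube's actual code.

// podwait sketches the label-selector wait suggested by the kapi.go:96
// entries above: poll until no matching pod is still Pending.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodsBySelector lists pods matching selector in ns and polls until
// every one of them has left the Pending phase and is Running.
func WaitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running == len(pods.Items) {
				return nil // the point at which kapi.go:107 logs the duration
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pods %q in namespace %q not Running within %v", selector, ns, timeout)
		}
		time.Sleep(500 * time.Millisecond) // assumed; the log shows a ~500ms cadence
	}
}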
	I1208 18:12:25.339704  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:25.437683  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:25.446654  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:25.541972  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:25.938957  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:25.946613  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:26.043417  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:26.440779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:26.447258  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:26.543799  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:26.939444  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:26.947191  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:27.043881  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:27.340561  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:27.439772  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:27.446972  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:27.543482  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:27.938332  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:27.947610  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:28.043475  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:28.440483  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:28.447521  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:28.544074  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:28.939424  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:28.947034  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:29.044600  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:29.439231  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:29.446780  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:29.542932  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:29.840035  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:29.938025  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:29.947344  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:30.043774  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:30.438187  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:30.447382  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:30.543186  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:30.937989  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:30.946862  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:31.042735  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:31.438033  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:31.446592  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:31.543906  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:31.926122  344702 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"False"
	I1208 18:12:31.938725  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:32.026017  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:32.044559  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:32.340748  344702 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace has status "Ready":"True"
	I1208 18:12:32.340778  344702 pod_ready.go:81] duration metric: took 11.584337358s waiting for pod "nvidia-device-plugin-daemonset-2vjv7" in "kube-system" namespace to be "Ready" ...
	I1208 18:12:32.340801  344702 pod_ready.go:38] duration metric: took 34.296151939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:12:32.340827  344702 api_server.go:52] waiting for apiserver process to appear ...
	I1208 18:12:32.340867  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 18:12:32.340925  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 18:12:32.439012  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:32.447610  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:32.454247  344702 cri.go:89] found id: "c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:32.454287  344702 cri.go:89] found id: ""
	I1208 18:12:32.454300  344702 logs.go:284] 1 containers: [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae]
	I1208 18:12:32.454364  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:32.522663  344702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 18:12:32.522743  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 18:12:32.544401  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:32.734753  344702 cri.go:89] found id: "4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:32.734782  344702 cri.go:89] found id: ""
	I1208 18:12:32.734793  344702 logs.go:284] 1 containers: [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065]
	I1208 18:12:32.734872  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:32.747429  344702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 18:12:32.747504  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 18:12:32.939163  344702 cri.go:89] found id: "cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:32.939193  344702 cri.go:89] found id: ""
	I1208 18:12:32.939203  344702 logs.go:284] 1 containers: [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76]
	I1208 18:12:32.939260  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:32.943021  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 18:12:32.943092  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 18:12:32.951149  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:32.951670  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:33.046091  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:33.144957  344702 cri.go:89] found id: "6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:33.145044  344702 cri.go:89] found id: ""
	I1208 18:12:33.145058  344702 logs.go:284] 1 containers: [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34]
	I1208 18:12:33.145138  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.149124  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 18:12:33.149192  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 18:12:33.321529  344702 cri.go:89] found id: "2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:33.321556  344702 cri.go:89] found id: ""
	I1208 18:12:33.321567  344702 logs.go:284] 1 containers: [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946]
	I1208 18:12:33.321620  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.325674  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 18:12:33.325743  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 18:12:33.429162  344702 cri.go:89] found id: "cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:33.429194  344702 cri.go:89] found id: ""
	I1208 18:12:33.429206  344702 logs.go:284] 1 containers: [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763]
	I1208 18:12:33.429266  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.433226  344702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 18:12:33.433311  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 18:12:33.441336  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:33.447395  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:33.535052  344702 cri.go:89] found id: "6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:33.535083  344702 cri.go:89] found id: ""
	I1208 18:12:33.535095  344702 logs.go:284] 1 containers: [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7]
	I1208 18:12:33.535154  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:33.539543  344702 logs.go:123] Gathering logs for etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] ...
	I1208 18:12:33.539580  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:33.544423  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:33.657142  344702 logs.go:123] Gathering logs for coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] ...
	I1208 18:12:33.657194  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:33.754798  344702 logs.go:123] Gathering logs for kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] ...
	I1208 18:12:33.754839  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:33.860488  344702 logs.go:123] Gathering logs for kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] ...
	I1208 18:12:33.860527  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:33.938312  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:33.947661  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:33.951488  344702 logs.go:123] Gathering logs for CRI-O ...
	I1208 18:12:33.951516  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 18:12:34.042945  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:34.101158  344702 logs.go:123] Gathering logs for container status ...
	I1208 18:12:34.101200  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 18:12:34.152441  344702 logs.go:123] Gathering logs for dmesg ...
	I1208 18:12:34.152477  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 18:12:34.183203  344702 logs.go:123] Gathering logs for describe nodes ...
	I1208 18:12:34.183247  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 18:12:34.347816  344702 logs.go:123] Gathering logs for kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] ...
	I1208 18:12:34.347850  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:34.382731  344702 logs.go:123] Gathering logs for kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] ...
	I1208 18:12:34.382761  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:34.438699  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:34.447019  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:34.487396  344702 logs.go:123] Gathering logs for kubelet ...
	I1208 18:12:34.487436  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 18:12:34.543183  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1208 18:12:34.573390  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:34.573569  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:34.608206  344702 logs.go:123] Gathering logs for kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] ...
	I1208 18:12:34.608250  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:34.670686  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:34.670720  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1208 18:12:34.670843  344702 out.go:239] X Problems detected in kubelet:
	W1208 18:12:34.670860  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:34.670870  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:34.670884  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:34.670901  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
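	The kubelet problem flagged above is the node authorizer at work: a kubelet may read a ConfigMap only once a pod bound to its node references it, and "no relationship found" means that link did not yet exist when the reflector tried to list kube-root-ca.crt, a transient condition during startup. One way to probe an authorization denial like this from Go is a SelfSubjectAccessReview (a hedged sketch; it checks whatever identity the kubeconfig carries, not the node's):

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver: may the current identity list configmaps in kube-system?
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "list",
				Resource:  "configmaps",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```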
	I1208 18:12:34.937755  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:34.946674  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:35.042214  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:35.438645  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:35.446614  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:35.543281  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:35.938779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:35.946828  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:36.042624  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:36.439052  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:36.447187  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:36.543270  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:36.937615  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:36.947194  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:37.043141  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:37.437450  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:37.446337  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:37.543009  344702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 18:12:37.939736  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:37.947271  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:38.042951  344702 kapi.go:107] duration metric: took 1m8.513674023s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1208 18:12:38.437982  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:38.447197  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 18:12:38.938398  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:38.946201  344702 kapi.go:107] duration metric: took 1m7.01486797s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1208 18:12:38.948237  344702 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-766826 cluster.
	I1208 18:12:38.949806  344702 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1208 18:12:38.951335  344702 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
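	A minimal sketch of opting a pod out of that credential mount with the label mentioned above (the `gcp-auth-skip-secret` key comes from the message; the "true" value, pod name, and image are assumptions for illustration):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			// The gcp-auth webhook skips pods carrying this label key; the
			// "true" value is an assumption, the message above names only the key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "docker.io/library/nginx:alpine"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```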
	I1208 18:12:39.438265  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:39.938280  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:40.438104  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:40.940896  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:41.438779  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:41.938639  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:42.437583  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:42.939922  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:43.441903  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:43.938389  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:44.438265  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:44.672353  344702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 18:12:44.687134  344702 api_server.go:72] duration metric: took 1m21.774518861s to wait for apiserver process to appear ...
	I1208 18:12:44.687161  344702 api_server.go:88] waiting for apiserver healthz status ...
	I1208 18:12:44.687201  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 18:12:44.687259  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 18:12:44.726545  344702 cri.go:89] found id: "c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:44.726573  344702 cri.go:89] found id: ""
	I1208 18:12:44.726585  344702 logs.go:284] 1 containers: [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae]
	I1208 18:12:44.726634  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.729967  344702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 18:12:44.730034  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 18:12:44.766801  344702 cri.go:89] found id: "4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:44.766826  344702 cri.go:89] found id: ""
	I1208 18:12:44.766836  344702 logs.go:284] 1 containers: [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065]
	I1208 18:12:44.766894  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.770317  344702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 18:12:44.770392  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 18:12:44.837785  344702 cri.go:89] found id: "cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:44.837808  344702 cri.go:89] found id: ""
	I1208 18:12:44.837816  344702 logs.go:284] 1 containers: [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76]
	I1208 18:12:44.837869  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.841311  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 18:12:44.841379  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 18:12:44.874200  344702 cri.go:89] found id: "6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:44.874235  344702 cri.go:89] found id: ""
	I1208 18:12:44.874246  344702 logs.go:284] 1 containers: [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34]
	I1208 18:12:44.874311  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.877648  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 18:12:44.877720  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 18:12:44.938698  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:44.951460  344702 cri.go:89] found id: "2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:44.951480  344702 cri.go:89] found id: ""
	I1208 18:12:44.951488  344702 logs.go:284] 1 containers: [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946]
	I1208 18:12:44.951537  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:44.955253  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 18:12:44.955317  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 18:12:45.035041  344702 cri.go:89] found id: "cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:45.035071  344702 cri.go:89] found id: ""
	I1208 18:12:45.035082  344702 logs.go:284] 1 containers: [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763]
	I1208 18:12:45.035131  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:45.038532  344702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 18:12:45.038602  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 18:12:45.079565  344702 cri.go:89] found id: "6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:45.079592  344702 cri.go:89] found id: ""
	I1208 18:12:45.079601  344702 logs.go:284] 1 containers: [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7]
	I1208 18:12:45.079656  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:45.083042  344702 logs.go:123] Gathering logs for kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] ...
	I1208 18:12:45.083061  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:45.184280  344702 logs.go:123] Gathering logs for kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] ...
	I1208 18:12:45.184323  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:45.233023  344702 logs.go:123] Gathering logs for dmesg ...
	I1208 18:12:45.233054  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 18:12:45.261532  344702 logs.go:123] Gathering logs for kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] ...
	I1208 18:12:45.261566  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:45.340782  344702 logs.go:123] Gathering logs for etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] ...
	I1208 18:12:45.340825  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:45.384391  344702 logs.go:123] Gathering logs for coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] ...
	I1208 18:12:45.384428  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:45.437981  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:45.459037  344702 logs.go:123] Gathering logs for kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] ...
	I1208 18:12:45.459080  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:45.534202  344702 logs.go:123] Gathering logs for kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] ...
	I1208 18:12:45.534239  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:45.571208  344702 logs.go:123] Gathering logs for CRI-O ...
	I1208 18:12:45.571237  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 18:12:45.688748  344702 logs.go:123] Gathering logs for container status ...
	I1208 18:12:45.688787  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 18:12:45.740228  344702 logs.go:123] Gathering logs for kubelet ...
	I1208 18:12:45.740261  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1208 18:12:45.788459  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:45.788631  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:45.829017  344702 logs.go:123] Gathering logs for describe nodes ...
	I1208 18:12:45.829063  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 18:12:45.937773  344702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 18:12:45.961327  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:45.961360  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1208 18:12:45.961425  344702 out.go:239] X Problems detected in kubelet:
	W1208 18:12:45.961441  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:45.961457  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:45.961471  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:45.961483  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:12:46.438073  344702 kapi.go:107] duration metric: took 1m16.013187384s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1208 18:12:46.440234  344702 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, inspektor-gadget, cloud-spanner, helm-tiller, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1208 18:12:46.441740  344702 addons.go:502] enable addons completed in 1m23.584281532s: enabled=[storage-provisioner ingress-dns storage-provisioner-rancher nvidia-device-plugin inspektor-gadget cloud-spanner helm-tiller metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1208 18:12:55.962705  344702 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1208 18:12:55.968101  344702 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1208 18:12:55.969183  344702 api_server.go:141] control plane version: v1.28.4
	I1208 18:12:55.969207  344702 api_server.go:131] duration metric: took 11.282039542s to wait for apiserver health ...
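	The healthz wait above is a plain HTTPS GET against the apiserver, which by default answers /healthz even for unauthenticated clients. A self-contained sketch, using the endpoint from the log (skipping certificate verification is a convenience assumption for a manual probe; a real client should trust the cluster CA from the kubeconfig):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip CA verification for a quick manual probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok, as in the log
}
```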
	I1208 18:12:55.969216  344702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 18:12:55.969239  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1208 18:12:55.969287  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 18:12:56.002589  344702 cri.go:89] found id: "c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:56.002609  344702 cri.go:89] found id: ""
	I1208 18:12:56.002618  344702 logs.go:284] 1 containers: [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae]
	I1208 18:12:56.002669  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.005907  344702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1208 18:12:56.005986  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 18:12:56.039317  344702 cri.go:89] found id: "4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:56.039339  344702 cri.go:89] found id: ""
	I1208 18:12:56.039347  344702 logs.go:284] 1 containers: [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065]
	I1208 18:12:56.039401  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.042640  344702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1208 18:12:56.042693  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 18:12:56.075357  344702 cri.go:89] found id: "cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:56.075385  344702 cri.go:89] found id: ""
	I1208 18:12:56.075399  344702 logs.go:284] 1 containers: [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76]
	I1208 18:12:56.075457  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.078654  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1208 18:12:56.078767  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 18:12:56.110980  344702 cri.go:89] found id: "6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:56.111001  344702 cri.go:89] found id: ""
	I1208 18:12:56.111009  344702 logs.go:284] 1 containers: [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34]
	I1208 18:12:56.111057  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.114289  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1208 18:12:56.114343  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 18:12:56.148911  344702 cri.go:89] found id: "2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:56.148933  344702 cri.go:89] found id: ""
	I1208 18:12:56.148941  344702 logs.go:284] 1 containers: [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946]
	I1208 18:12:56.148981  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.152447  344702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 18:12:56.152505  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 18:12:56.185509  344702 cri.go:89] found id: "cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:56.185538  344702 cri.go:89] found id: ""
	I1208 18:12:56.185548  344702 logs.go:284] 1 containers: [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763]
	I1208 18:12:56.185598  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.188968  344702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1208 18:12:56.189043  344702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 18:12:56.222226  344702 cri.go:89] found id: "6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:56.222256  344702 cri.go:89] found id: ""
	I1208 18:12:56.222275  344702 logs.go:284] 1 containers: [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7]
	I1208 18:12:56.222329  344702 ssh_runner.go:195] Run: which crictl
	I1208 18:12:56.225689  344702 logs.go:123] Gathering logs for coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] ...
	I1208 18:12:56.225719  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76"
	I1208 18:12:56.263810  344702 logs.go:123] Gathering logs for kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] ...
	I1208 18:12:56.263841  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34"
	I1208 18:12:56.302691  344702 logs.go:123] Gathering logs for container status ...
	I1208 18:12:56.302723  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 18:12:56.345691  344702 logs.go:123] Gathering logs for kubelet ...
	I1208 18:12:56.345725  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1208 18:12:56.391606  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:56.391782  344702 logs.go:138] Found kubelet problem: Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:56.425899  344702 logs.go:123] Gathering logs for dmesg ...
	I1208 18:12:56.425937  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 18:12:56.454115  344702 logs.go:123] Gathering logs for etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] ...
	I1208 18:12:56.454153  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065"
	I1208 18:12:56.495038  344702 logs.go:123] Gathering logs for kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] ...
	I1208 18:12:56.495075  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763"
	I1208 18:12:56.553257  344702 logs.go:123] Gathering logs for kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] ...
	I1208 18:12:56.553295  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7"
	I1208 18:12:56.586270  344702 logs.go:123] Gathering logs for CRI-O ...
	I1208 18:12:56.586303  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1208 18:12:56.660408  344702 logs.go:123] Gathering logs for describe nodes ...
	I1208 18:12:56.660444  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1208 18:12:56.757232  344702 logs.go:123] Gathering logs for kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] ...
	I1208 18:12:56.757263  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae"
	I1208 18:12:56.800233  344702 logs.go:123] Gathering logs for kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] ...
	I1208 18:12:56.800265  344702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946"
	I1208 18:12:56.833966  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:56.833992  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1208 18:12:56.834062  344702 out.go:239] X Problems detected in kubelet:
	W1208 18:12:56.834076  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: W1208 18:11:23.434776    1561 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	W1208 18:12:56.834086  344702 out.go:239]   Dec 08 18:11:23 addons-766826 kubelet[1561]: E1208 18:11:23.434824    1561 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-766826" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-766826' and this object
	I1208 18:12:56.834099  344702 out.go:309] Setting ErrFile to fd 2...
	I1208 18:12:56.834112  344702 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:13:06.844971  344702 system_pods.go:59] 19 kube-system pods found
	I1208 18:13:06.845002  344702 system_pods.go:61] "coredns-5dd5756b68-gr7cp" [d095f129-9a95-4ac0-bb7a-12d2353223cd] Running
	I1208 18:13:06.845007  344702 system_pods.go:61] "csi-hostpath-attacher-0" [19f35feb-6448-4e6f-b49b-c670972cc314] Running
	I1208 18:13:06.845011  344702 system_pods.go:61] "csi-hostpath-resizer-0" [970d12ad-fd8f-4488-bca9-9f3d9e3bcb98] Running
	I1208 18:13:06.845015  344702 system_pods.go:61] "csi-hostpathplugin-nffnm" [e337b550-055c-424a-af38-ba18f2f436de] Running
	I1208 18:13:06.845019  344702 system_pods.go:61] "etcd-addons-766826" [d4666697-78e7-4a9b-9317-9511a0005ade] Running
	I1208 18:13:06.845022  344702 system_pods.go:61] "kindnet-bdq5w" [e1139c0f-a09a-4fce-9e52-95a17bc4b151] Running
	I1208 18:13:06.845026  344702 system_pods.go:61] "kube-apiserver-addons-766826" [35c9f38c-7943-42b2-acc3-4819e39b15a1] Running
	I1208 18:13:06.845030  344702 system_pods.go:61] "kube-controller-manager-addons-766826" [d6aa247e-faad-4782-8377-f8d2255ea109] Running
	I1208 18:13:06.845037  344702 system_pods.go:61] "kube-ingress-dns-minikube" [4dbe76f2-999f-4e8a-beac-8c4693152b8f] Running
	I1208 18:13:06.845040  344702 system_pods.go:61] "kube-proxy-sqqhb" [b59bf415-faa1-43be-8604-f2e271f4257a] Running
	I1208 18:13:06.845044  344702 system_pods.go:61] "kube-scheduler-addons-766826" [1d43bdbc-d587-443a-a9d0-9ec51334900a] Running
	I1208 18:13:06.845049  344702 system_pods.go:61] "metrics-server-7c66d45ddc-zrxqf" [96be6ea9-f7ed-447e-96f0-2de2852c5689] Running
	I1208 18:13:06.845053  344702 system_pods.go:61] "nvidia-device-plugin-daemonset-2vjv7" [fbd353d3-71e8-4b51-9170-9716493afe0b] Running
	I1208 18:13:06.845057  344702 system_pods.go:61] "registry-n29ff" [51d60be4-1fcd-4243-a9f5-b01f0c18e985] Running
	I1208 18:13:06.845063  344702 system_pods.go:61] "registry-proxy-pg8rp" [831df691-d6e7-47e4-81c5-ec68788fcdb4] Running
	I1208 18:13:06.845067  344702 system_pods.go:61] "snapshot-controller-58dbcc7b99-dnszh" [24781340-e2e3-49b0-815a-7d325d7e1212] Running
	I1208 18:13:06.845073  344702 system_pods.go:61] "snapshot-controller-58dbcc7b99-f7f7s" [fd8ba136-b4ca-4eb4-a724-daa214f987ce] Running
	I1208 18:13:06.845077  344702 system_pods.go:61] "storage-provisioner" [797fe11e-ddc2-494c-b345-9391a39ae877] Running
	I1208 18:13:06.845082  344702 system_pods.go:61] "tiller-deploy-7b677967b9-lf6zk" [bb5789f0-c460-44b1-8cef-9b34b3892cf5] Running
	I1208 18:13:06.845088  344702 system_pods.go:74] duration metric: took 10.875866123s to wait for pod list to return data ...
	I1208 18:13:06.845099  344702 default_sa.go:34] waiting for default service account to be created ...
	I1208 18:13:06.847373  344702 default_sa.go:45] found service account: "default"
	I1208 18:13:06.847400  344702 default_sa.go:55] duration metric: took 2.294923ms for default service account to be created ...
	I1208 18:13:06.847408  344702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 18:13:06.856731  344702 system_pods.go:86] 19 kube-system pods found
	I1208 18:13:06.856759  344702 system_pods.go:89] "coredns-5dd5756b68-gr7cp" [d095f129-9a95-4ac0-bb7a-12d2353223cd] Running
	I1208 18:13:06.856765  344702 system_pods.go:89] "csi-hostpath-attacher-0" [19f35feb-6448-4e6f-b49b-c670972cc314] Running
	I1208 18:13:06.856768  344702 system_pods.go:89] "csi-hostpath-resizer-0" [970d12ad-fd8f-4488-bca9-9f3d9e3bcb98] Running
	I1208 18:13:06.856772  344702 system_pods.go:89] "csi-hostpathplugin-nffnm" [e337b550-055c-424a-af38-ba18f2f436de] Running
	I1208 18:13:06.856776  344702 system_pods.go:89] "etcd-addons-766826" [d4666697-78e7-4a9b-9317-9511a0005ade] Running
	I1208 18:13:06.856782  344702 system_pods.go:89] "kindnet-bdq5w" [e1139c0f-a09a-4fce-9e52-95a17bc4b151] Running
	I1208 18:13:06.856786  344702 system_pods.go:89] "kube-apiserver-addons-766826" [35c9f38c-7943-42b2-acc3-4819e39b15a1] Running
	I1208 18:13:06.856790  344702 system_pods.go:89] "kube-controller-manager-addons-766826" [d6aa247e-faad-4782-8377-f8d2255ea109] Running
	I1208 18:13:06.856794  344702 system_pods.go:89] "kube-ingress-dns-minikube" [4dbe76f2-999f-4e8a-beac-8c4693152b8f] Running
	I1208 18:13:06.856798  344702 system_pods.go:89] "kube-proxy-sqqhb" [b59bf415-faa1-43be-8604-f2e271f4257a] Running
	I1208 18:13:06.856802  344702 system_pods.go:89] "kube-scheduler-addons-766826" [1d43bdbc-d587-443a-a9d0-9ec51334900a] Running
	I1208 18:13:06.856806  344702 system_pods.go:89] "metrics-server-7c66d45ddc-zrxqf" [96be6ea9-f7ed-447e-96f0-2de2852c5689] Running
	I1208 18:13:06.856810  344702 system_pods.go:89] "nvidia-device-plugin-daemonset-2vjv7" [fbd353d3-71e8-4b51-9170-9716493afe0b] Running
	I1208 18:13:06.856815  344702 system_pods.go:89] "registry-n29ff" [51d60be4-1fcd-4243-a9f5-b01f0c18e985] Running
	I1208 18:13:06.856818  344702 system_pods.go:89] "registry-proxy-pg8rp" [831df691-d6e7-47e4-81c5-ec68788fcdb4] Running
	I1208 18:13:06.856822  344702 system_pods.go:89] "snapshot-controller-58dbcc7b99-dnszh" [24781340-e2e3-49b0-815a-7d325d7e1212] Running
	I1208 18:13:06.856826  344702 system_pods.go:89] "snapshot-controller-58dbcc7b99-f7f7s" [fd8ba136-b4ca-4eb4-a724-daa214f987ce] Running
	I1208 18:13:06.856830  344702 system_pods.go:89] "storage-provisioner" [797fe11e-ddc2-494c-b345-9391a39ae877] Running
	I1208 18:13:06.856833  344702 system_pods.go:89] "tiller-deploy-7b677967b9-lf6zk" [bb5789f0-c460-44b1-8cef-9b34b3892cf5] Running
	I1208 18:13:06.856839  344702 system_pods.go:126] duration metric: took 9.426847ms to wait for k8s-apps to be running ...
	I1208 18:13:06.856845  344702 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 18:13:06.856890  344702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:13:06.868404  344702 system_svc.go:56] duration metric: took 11.546991ms WaitForService to wait for kubelet.
	I1208 18:13:06.868434  344702 kubeadm.go:581] duration metric: took 1m43.955826122s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1208 18:13:06.868462  344702 node_conditions.go:102] verifying NodePressure condition ...
	I1208 18:13:06.871556  344702 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1208 18:13:06.871604  344702 node_conditions.go:123] node cpu capacity is 8
	I1208 18:13:06.871619  344702 node_conditions.go:105] duration metric: took 3.15144ms to run NodePressure ...
	I1208 18:13:06.871630  344702 start.go:228] waiting for startup goroutines ...
	I1208 18:13:06.871641  344702 start.go:233] waiting for cluster config update ...
	I1208 18:13:06.871655  344702 start.go:242] writing updated cluster config ...
	I1208 18:13:06.871895  344702 ssh_runner.go:195] Run: rm -f paused
	I1208 18:13:06.921602  344702 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1208 18:13:06.925500  344702 out.go:177] * Done! kubectl is now configured to use "addons-766826" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 08 18:13:27 addons-766826 crio[949]: time="2023-12-08 18:13:27.836658091Z" level=info msg="Deleting pod kube-system_metrics-server-7c66d45ddc-zrxqf from CNI network \"kindnet\" (type=ptp)"
	Dec 08 18:13:27 addons-766826 crio[949]: time="2023-12-08 18:13:27.922316471Z" level=info msg="Stopped pod sandbox: bda8242772b869bb6f288ecb9d64d6b639be671e69263ed1f3f92cb2ffa6eb8d" id=4bd821be-73f5-4ad9-b9de-7d9196f5e848 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.142925370Z" level=info msg="Removing container: 8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049" id=1d0546e4-11a3-485a-8fc8-805c7f04dae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.220322546Z" level=info msg="Running pod sandbox: default/nginx/POD" id=9dcc33aa-f8ca-48a5-b83c-88963629de4d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.220395371Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.221574926Z" level=info msg="Removed container 8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049: kube-system/metrics-server-7c66d45ddc-zrxqf/metrics-server" id=1d0546e4-11a3-485a-8fc8-805c7f04dae9 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.240197838Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:3d1f216fb6c5b0fe4729be525858fac92f089f9469f8aa23d77c3d119b2451a5 UID:2f43e8f3-b864-47fb-9ed0-22d1c06b4980 NetNS:/var/run/netns/d43ae98d-a7d6-41bf-80af-eccece84cdb3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.240227984Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.249669885Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:3d1f216fb6c5b0fe4729be525858fac92f089f9469f8aa23d77c3d119b2451a5 UID:2f43e8f3-b864-47fb-9ed0-22d1c06b4980 NetNS:/var/run/netns/d43ae98d-a7d6-41bf-80af-eccece84cdb3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.249866938Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.282046833Z" level=info msg="Ran pod sandbox 3d1f216fb6c5b0fe4729be525858fac92f089f9469f8aa23d77c3d119b2451a5 with infra container: default/nginx/POD" id=9dcc33aa-f8ca-48a5-b83c-88963629de4d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.283269981Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8f4b6eb7-9d93-494d-9966-ef9677dc6ae9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.283496634Z" level=info msg="Image docker.io/nginx:alpine not found" id=8f4b6eb7-9d93-494d-9966-ef9677dc6ae9 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.284560976Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=ff785fc2-b6a4-4a67-b6ed-a10bc98c4822 name=/runtime.v1.ImageService/PullImage
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.288140466Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.637360307Z" level=info msg="Running pod sandbox: default/task-pv-pod-restore/POD" id=e2351509-e6ac-4bb8-aab9-fd0af94d447e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.637437167Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.654540435Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:5ba190589b0b951869565b5b3cc24f49568d3919d3efc4dd173746209b6e13b6 UID:cb137d24-d069-403e-b70b-de65661121e5 NetNS:/var/run/netns/d5e8f313-dba1-45bc-b3b2-c02a67eb064d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.654576355Z" level=info msg="Adding pod default_task-pv-pod-restore to CNI network \"kindnet\" (type=ptp)"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.664387028Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:5ba190589b0b951869565b5b3cc24f49568d3919d3efc4dd173746209b6e13b6 UID:cb137d24-d069-403e-b70b-de65661121e5 NetNS:/var/run/netns/d5e8f313-dba1-45bc-b3b2-c02a67eb064d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.664545996Z" level=info msg="Checking pod default_task-pv-pod-restore for CNI network kindnet (type=ptp)"
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.685122188Z" level=info msg="Ran pod sandbox 5ba190589b0b951869565b5b3cc24f49568d3919d3efc4dd173746209b6e13b6 with infra container: default/task-pv-pod-restore/POD" id=e2351509-e6ac-4bb8-aab9-fd0af94d447e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.686287821Z" level=info msg="Checking image status: docker.io/nginx:latest" id=f07397d0-b607-46d6-af9b-f03d66166f43 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.686567206Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7],Size_:190960382,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f07397d0-b607-46d6-af9b-f03d66166f43 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:13:28 addons-766826 crio[949]: time="2023-12-08 18:13:28.849772071Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	8a4a274d50e82       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             7 seconds ago        Exited              helper-pod                               0                   fee1922c34a24       helper-pod-delete-pvc-de77890f-3fa6-42c6-805e-20b83a22f899
	99ed7e1144505       docker.io/library/busybox@sha256:1780cb47b7dfbcbf1e511be1cdb62722bd0ce208b996ea199689b56892e15af9                                            12 seconds ago       Exited              busybox                                  0                   4824f9b991b5a       test-local-path
	cb23764e485dc       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            19 seconds ago       Exited              helper-pod                               0                   789f87d8edddc       helper-pod-create-pvc-de77890f-3fa6-42c6-805e-20b83a22f899
	0824500ec7067       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          43 seconds ago       Running             csi-snapshotter                          0                   ff4233ce3feda       csi-hostpathplugin-nffnm
	232657ab7fabe       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          45 seconds ago       Running             csi-provisioner                          0                   ff4233ce3feda       csi-hostpathplugin-nffnm
	bb14817ff5e35       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            46 seconds ago       Running             liveness-probe                           0                   ff4233ce3feda       csi-hostpathplugin-nffnm
	6c8e158355e0f       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           47 seconds ago       Running             hostpath                                 0                   ff4233ce3feda       csi-hostpathplugin-nffnm
	e79a22ccbe6e1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce                            49 seconds ago       Exited              gadget                                   3                   a640bace6f648       gadget-p6mlj
	4d28ad702fe9c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                49 seconds ago       Running             node-driver-registrar                    0                   ff4233ce3feda       csi-hostpathplugin-nffnm
	d128959929e79       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 50 seconds ago       Running             gcp-auth                                 0                   f1ecc38efca75       gcp-auth-d4c87556c-rd4hl
	0c195e40e848b       registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5                             52 seconds ago       Running             controller                               0                   ba791fdc45604       ingress-nginx-controller-7c6974c4d8-nzbhz
	58c90aa01eff6       nvcr.io/nvidia/k8s-device-plugin@sha256:0153ba5eac2182064434f0101acce97ef512df59a32e1fbbdef12ca75c514e1e                                     58 seconds ago       Running             nvidia-device-plugin-ctr                 0                   4570c21b32b40       nvidia-device-plugin-daemonset-2vjv7
	aeb735aa29073       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   22904efbd29c3       snapshot-controller-58dbcc7b99-f7f7s
	f49e9eaa07a69       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   ff4233ce3feda       csi-hostpathplugin-nffnm
	e2d2bd47d9c7d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   bee499feb6ede       csi-hostpath-attacher-0
	48f29e1dfaa9d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   f97c8d0bee55c       csi-hostpath-resizer-0
	00bed82a83ec8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   About a minute ago   Exited              patch                                    0                   4cfa4664d8fcf       ingress-nginx-admission-patch-gh9hw
	17a7564622f61       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   f86d34a67f09b       local-path-provisioner-78b46b4d5c-fwrj9
	da36ca698a199       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   87c7038a11867       snapshot-controller-58dbcc7b99-dnszh
	15cb33eff7a94       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   94b04541a264d       kube-ingress-dns-minikube
	9bfb3bf0f01ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   About a minute ago   Exited              create                                   0                   3f19abc9082a3       ingress-nginx-admission-create-4dfwg
	0afc54229499c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   f9a5a8d390c6d       storage-provisioner
	cbd9f355eab53       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   d8245fa45ba4a       coredns-5dd5756b68-gr7cp
	6603f43b0eb58       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                                             2 minutes ago        Running             kindnet-cni                              0                   c2310d083a1c1       kindnet-bdq5w
	2c809a9eebc06       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             2 minutes ago        Running             kube-proxy                               0                   21aa5178a09dc       kube-proxy-sqqhb
	c631bcea8eada       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             2 minutes ago        Running             kube-apiserver                           0                   73a541f0f6c09       kube-apiserver-addons-766826
	cd9915ab51276       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             2 minutes ago        Running             kube-controller-manager                  0                   74cc43c534dd2       kube-controller-manager-addons-766826
	4e07b412711cd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   7de1ce2d07c0e       etcd-addons-766826
	6499d11f12dc1       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             2 minutes ago        Running             kube-scheduler                           0                   c8dfeda773c13       kube-scheduler-addons-766826
	
	* 
	* ==> coredns [cbd9f355eab53c9a47b524092bd0dd05b5e872ee76ca506b9aa10677fdcfce76] <==
	* [INFO] 10.244.0.16:41227 - 24001 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090011s
	[INFO] 10.244.0.16:54280 - 34011 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004204961s
	[INFO] 10.244.0.16:54280 - 5340 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00494698s
	[INFO] 10.244.0.16:56475 - 22769 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005108229s
	[INFO] 10.244.0.16:56475 - 29436 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006668247s
	[INFO] 10.244.0.16:50687 - 19693 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004716229s
	[INFO] 10.244.0.16:50687 - 29679 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006907537s
	[INFO] 10.244.0.16:51631 - 44046 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096578s
	[INFO] 10.244.0.16:51631 - 32272 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119066s
	[INFO] 10.244.0.20:48230 - 19832 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213524s
	[INFO] 10.244.0.20:56985 - 61725 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168435s
	[INFO] 10.244.0.20:42018 - 12395 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140347s
	[INFO] 10.244.0.20:55815 - 52735 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170507s
	[INFO] 10.244.0.20:46212 - 10641 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114661s
	[INFO] 10.244.0.20:35719 - 51155 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099156s
	[INFO] 10.244.0.20:43892 - 56512 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007038315s
	[INFO] 10.244.0.20:48795 - 56440 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007313715s
	[INFO] 10.244.0.20:54459 - 48778 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006889701s
	[INFO] 10.244.0.20:48816 - 23564 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007918809s
	[INFO] 10.244.0.20:55158 - 26318 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006514256s
	[INFO] 10.244.0.20:48460 - 59145 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007569835s
	[INFO] 10.244.0.20:43378 - 27475 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000637453s
	[INFO] 10.244.0.20:44992 - 5450 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000825741s
	[INFO] 10.244.0.24:39971 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000207037s
	[INFO] 10.244.0.24:48044 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157619s
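	The NXDOMAIN bursts above are ordinary search-path expansion rather than a resolution fault: with the default ndots:5, a name such as storage.googleapis.com is tried against every search domain before being sent upstream as an absolute query, and only the final bare-name lookup returns NOERROR. Reconstructed from the suffixes visible in the log for the gcp-auth pod, its /etc/resolv.conf would plausibly read as follows (an illustrative sketch, not captured output; the nameserver address assumes the stock kube-dns ClusterIP):
	
	    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	    nameserver 10.96.0.10
	    options ndots:5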
	
	* 
	* ==> describe nodes <==
	* Name:               addons-766826
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-766826
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b
	                    minikube.k8s.io/name=addons-766826
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_08T18_11_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-766826
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-766826"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Dec 2023 18:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-766826
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Dec 2023 18:13:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Dec 2023 18:13:12 +0000   Fri, 08 Dec 2023 18:11:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Dec 2023 18:13:12 +0000   Fri, 08 Dec 2023 18:11:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Dec 2023 18:13:12 +0000   Fri, 08 Dec 2023 18:11:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Dec 2023 18:13:12 +0000   Fri, 08 Dec 2023 18:11:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-766826
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 37a00760348a48c8ac7cc6f4c06da6dd
	  System UUID:                426938ba-0e3e-4298-85c5-a948711395ac
	  Boot ID:                    fbb3830a-6e88-496f-844f-172e564c45c3
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-p6mlj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gcp-auth                    gcp-auth-d4c87556c-rd4hl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-nzbhz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m
	  kube-system                 coredns-5dd5756b68-gr7cp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m6s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpathplugin-nffnm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-766826                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m20s
	  kube-system                 kindnet-bdq5w                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-addons-766826                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-addons-766826        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-sqqhb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-766826                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 nvidia-device-plugin-daemonset-2vjv7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 snapshot-controller-58dbcc7b99-dnszh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-58dbcc7b99-f7f7s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fwrj9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m1s   kube-proxy       
	  Normal  Starting                 2m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m19s  kubelet          Node addons-766826 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s  kubelet          Node addons-766826 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s  kubelet          Node addons-766826 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m7s   node-controller  Node addons-766826 event: Registered Node addons-766826 in Controller
	  Normal  NodeReady                92s    kubelet          Node addons-766826 status is now: NodeReady
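	As a consistency check, the Allocated resources figures above are just the column sums of the pod table: CPU requests are 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, i.e. 11% of the node's 8 CPUs, and the lone 100m CPU limit comes from kindnet; memory requests are 90Mi + 70Mi + 100Mi + 50Mi = 310Mi, and memory limits are 170Mi + 50Mi = 220Mi.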
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 89 19 f7 d6 64 08 06
	[  +0.000322] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a c8 7d 06 52 14 08 06
	[ +19.454668] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 3c 81 cc 4b c6 08 06
	[Dec 8 17:23] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 80 46 7d b8 6b 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 8a 3c 81 cc 4b c6 08 06
	[ +21.754008] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev bridge
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca 5a 55 7e 00 fc 08 06
	[  +0.531613] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 5a 55 7e 00 fc 08 06
	[ +11.870355] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 ad 2b 90 71 c9 08 06
	[  +0.015014] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 80 af a2 6c d3 08 06
	[  +1.246435] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a e5 8d e2 3a 9d 08 06
	[  +0.000356] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5a 55 7e 00 fc 08 06
	[Dec 8 17:24] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 23 97 95 53 1e 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff fe 80 af a2 6c d3 08 06
	
	* 
	* ==> etcd [4e07b412711cd517b9db5ca157bcfb9f67d0e791c02d493cd768785f4d0c0065] <==
	* {"level":"info","ts":"2023-12-08T18:11:05.237084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-08T18:11:05.237549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2023-12-08T18:11:23.732642Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.560932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-12-08T18:11:23.732728Z","caller":"traceutil/trace.go:171","msg":"trace[198064214] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:343; }","duration":"103.666302ms","start":"2023-12-08T18:11:23.629047Z","end":"2023-12-08T18:11:23.732713Z","steps":["trace[198064214] 'range keys from in-memory index tree'  (duration: 103.458031ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:23.732891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.583332ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-766826\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-12-08T18:11:23.73298Z","caller":"traceutil/trace.go:171","msg":"trace[1989600869] range","detail":"{range_begin:/registry/minions/addons-766826; range_end:; response_count:1; response_revision:343; }","duration":"103.679827ms","start":"2023-12-08T18:11:23.629286Z","end":"2023-12-08T18:11:23.732966Z","steps":["trace[1989600869] 'range keys from in-memory index tree'  (duration: 103.501553ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:23.733344Z","caller":"traceutil/trace.go:171","msg":"trace[256367974] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"103.791983ms","start":"2023-12-08T18:11:23.62954Z","end":"2023-12-08T18:11:23.733332Z","steps":["trace[256367974] 'process raft request'  (duration: 90.781279ms)","trace[256367974] 'compare'  (duration: 12.188541ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:11:23.733367Z","caller":"traceutil/trace.go:171","msg":"trace[1998711070] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"102.635603ms","start":"2023-12-08T18:11:23.630722Z","end":"2023-12-08T18:11:23.733357Z","steps":["trace[1998711070] 'process raft request'  (duration: 102.490478ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:23.733515Z","caller":"traceutil/trace.go:171","msg":"trace[735671773] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"102.581799ms","start":"2023-12-08T18:11:23.630926Z","end":"2023-12-08T18:11:23.733508Z","steps":["trace[735671773] 'process raft request'  (duration: 102.338818ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:23.733523Z","caller":"traceutil/trace.go:171","msg":"trace[1525709568] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"102.503537ms","start":"2023-12-08T18:11:23.631012Z","end":"2023-12-08T18:11:23.733515Z","steps":["trace[1525709568] 'process raft request'  (duration: 102.290187ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:25.532937Z","caller":"traceutil/trace.go:171","msg":"trace[1960090481] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"109.397036ms","start":"2023-12-08T18:11:25.423521Z","end":"2023-12-08T18:11:25.532918Z","steps":["trace[1960090481] 'process raft request'  (duration: 97.364173ms)","trace[1960090481] 'compare'  (duration: 11.648341ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:11:25.820831Z","caller":"traceutil/trace.go:171","msg":"trace[712776121] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"198.312889ms","start":"2023-12-08T18:11:25.622497Z","end":"2023-12-08T18:11:25.820809Z","steps":["trace[712776121] 'process raft request'  (duration: 197.865601ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:25.925416Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-08T18:11:25.622445Z","time spent":"302.504195ms","remote":"127.0.0.1:38348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":197,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:212 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:134 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" > >"}
	{"level":"info","ts":"2023-12-08T18:11:26.134734Z","caller":"traceutil/trace.go:171","msg":"trace[1948602233] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"108.392024ms","start":"2023-12-08T18:11:26.026323Z","end":"2023-12-08T18:11:26.134715Z","steps":["trace[1948602233] 'process raft request'  (duration: 107.972381ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:26.240151Z","caller":"traceutil/trace.go:171","msg":"trace[943220108] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"103.852874ms","start":"2023-12-08T18:11:26.136278Z","end":"2023-12-08T18:11:26.240131Z","steps":["trace[943220108] 'process raft request'  (duration: 91.295321ms)","trace[943220108] 'compare'  (duration: 12.4531ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-08T18:11:26.535312Z","caller":"traceutil/trace.go:171","msg":"trace[1119255451] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"190.699483ms","start":"2023-12-08T18:11:26.344595Z","end":"2023-12-08T18:11:26.535295Z","steps":["trace[1119255451] 'process raft request'  (duration: 190.641831ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:26.535606Z","caller":"traceutil/trace.go:171","msg":"trace[173606981] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"191.228132ms","start":"2023-12-08T18:11:26.344366Z","end":"2023-12-08T18:11:26.535594Z","steps":["trace[173606981] 'process raft request'  (duration: 186.757384ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:28.037689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.076715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregated-metrics-reader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-08T18:11:28.037762Z","caller":"traceutil/trace.go:171","msg":"trace[905353659] range","detail":"{range_begin:/registry/clusterroles/system:aggregated-metrics-reader; range_end:; response_count:0; response_revision:473; }","duration":"100.157924ms","start":"2023-12-08T18:11:27.93759Z","end":"2023-12-08T18:11:28.037748Z","steps":["trace[905353659] 'agreement among raft nodes before linearized reading'  (duration: 100.058209ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:28.129125Z","caller":"traceutil/trace.go:171","msg":"trace[1743088610] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"105.817797ms","start":"2023-12-08T18:11:28.023279Z","end":"2023-12-08T18:11:28.129097Z","steps":["trace[1743088610] 'process raft request'  (duration: 102.895289ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:11:28.12988Z","caller":"traceutil/trace.go:171","msg":"trace[554597905] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"103.720153ms","start":"2023-12-08T18:11:28.026138Z","end":"2023-12-08T18:11:28.129858Z","steps":["trace[554597905] 'process raft request'  (duration: 102.489608ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-08T18:11:28.130318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.606337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:4998"}
	{"level":"info","ts":"2023-12-08T18:11:28.139411Z","caller":"traceutil/trace.go:171","msg":"trace[488964075] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:480; }","duration":"116.698248ms","start":"2023-12-08T18:11:28.02269Z","end":"2023-12-08T18:11:28.139388Z","steps":["trace[488964075] 'agreement among raft nodes before linearized reading'  (duration: 107.574326ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:12:43.573328Z","caller":"traceutil/trace.go:171","msg":"trace[1020358742] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"129.689246ms","start":"2023-12-08T18:12:43.44361Z","end":"2023-12-08T18:12:43.5733Z","steps":["trace[1020358742] 'process raft request'  (duration: 129.448663ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-08T18:12:48.993426Z","caller":"traceutil/trace.go:171","msg":"trace[254635281] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"124.182441ms","start":"2023-12-08T18:12:48.869219Z","end":"2023-12-08T18:12:48.993401Z","steps":["trace[254635281] 'process raft request'  (duration: 61.399105ms)","trace[254635281] 'compare'  (duration: 62.649277ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [d128959929e7981c958c96e650660a47cfacb39887e367236929836936acc83f] <==
	* 2023/12/08 18:12:38 GCP Auth Webhook started!
	2023/12/08 18:13:07 Ready to marshal response ...
	2023/12/08 18:13:07 Ready to write response ...
	2023/12/08 18:13:07 Ready to marshal response ...
	2023/12/08 18:13:07 Ready to write response ...
	2023/12/08 18:13:09 Ready to marshal response ...
	2023/12/08 18:13:09 Ready to write response ...
	2023/12/08 18:13:17 Ready to marshal response ...
	2023/12/08 18:13:17 Ready to write response ...
	2023/12/08 18:13:17 Ready to marshal response ...
	2023/12/08 18:13:17 Ready to write response ...
	2023/12/08 18:13:20 Ready to marshal response ...
	2023/12/08 18:13:20 Ready to write response ...
	2023/12/08 18:13:27 Ready to marshal response ...
	2023/12/08 18:13:27 Ready to write response ...
	2023/12/08 18:13:28 Ready to marshal response ...
	2023/12/08 18:13:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:13:29 up  1:55,  0 users,  load average: 2.31, 1.26, 0.60
	Linux addons-766826 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [6603f43b0eb58a7a2559575f859fc282b7fae3da76b1928e571bf69b181830a7] <==
	* podIP = 192.168.49.2
	I1208 18:11:26.437057       1 main.go:116] setting mtu 1500 for CNI 
	I1208 18:11:26.437111       1 main.go:146] kindnetd IP family: "ipv4"
	I1208 18:11:26.437161       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1208 18:11:57.450654       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1208 18:11:57.457618       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:11:57.457644       1 main.go:227] handling current node
	I1208 18:12:07.522588       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:12:07.522615       1 main.go:227] handling current node
	I1208 18:12:17.533993       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:12:17.534015       1 main.go:227] handling current node
	I1208 18:12:27.546380       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:12:27.546409       1 main.go:227] handling current node
	I1208 18:12:37.557679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:12:37.557704       1 main.go:227] handling current node
	I1208 18:12:47.569652       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:12:47.569675       1 main.go:227] handling current node
	I1208 18:12:57.581639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:12:57.581665       1 main.go:227] handling current node
	I1208 18:13:07.594589       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:13:07.594629       1 main.go:227] handling current node
	I1208 18:13:17.598003       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:13:17.598027       1 main.go:227] handling current node
	I1208 18:13:27.610899       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:13:27.610930       1 main.go:227] handling current node
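	The single "Failed to get nodes, retrying after error ... i/o timeout" entry at 18:11:57 is the usual startup race: kindnet polls the apiserver service VIP (10.96.0.1) before node networking is fully programmed, and every subsequent ten-second reconcile in the log succeeds. A quick way to confirm the daemonset settled, assuming the pods carry the conventional app=kindnet label:
	
	    kubectl --context addons-766826 -n kube-system get pods -l app=kindnet -o wide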
	
	* 
	* ==> kube-apiserver [c631bcea8eada308b04044a3731c44a05d4f9ad77feac8eca89e1f3e9f5708ae] <==
	* I1208 18:11:30.254889       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.110.199.127"}
	I1208 18:11:30.261335       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1208 18:11:30.342427       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.110.207.43"}
	W1208 18:11:30.948672       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1208 18:11:31.731243       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.100.140.130"}
	W1208 18:11:57.630028       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.140.130:443: connect: connection refused
	E1208 18:11:57.630068       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.140.130:443: connect: connection refused
	W1208 18:11:57.630091       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.140.130:443: connect: connection refused
	E1208 18:11:57.630117       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.140.130:443: connect: connection refused
	W1208 18:11:57.655431       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.140.130:443: connect: connection refused
	E1208 18:11:57.655470       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.140.130:443: connect: connection refused
	I1208 18:12:07.279068       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1208 18:12:20.756949       1 handler_proxy.go:93] no RequestInfo found in the context
	E1208 18:12:20.757020       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1208 18:12:20.757161       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.213.61:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.213.61:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.213.61:443: connect: connection refused
	I1208 18:12:20.757397       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1208 18:12:20.758791       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.213.61:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.213.61:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.213.61:443: connect: connection refused
	I1208 18:12:20.920097       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1208 18:13:07.323153       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1208 18:13:20.503235       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:60802: read: connection reset by peer
	I1208 18:13:23.034542       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1208 18:13:27.386933       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1208 18:13:27.609121       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.117.13"}
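	The "Failed calling webhook, failing open gcp-auth-mutate.k8s.io" warnings at 18:11:57 mean the mutating webhook is registered to fail open, so pod admission proceeded while the gcp-auth service was still starting. The log's target URL (https://gcp-auth.gcp-auth.svc:443/mutate) pins the service name, namespace, and path; the rest of this stanza is a minimal sketch with assumed values, not the addon's actual manifest:
	
	    apiVersion: admissionregistration.k8s.io/v1
	    kind: MutatingWebhookConfiguration
	    webhooks:
	      - name: gcp-auth-mutate.k8s.io
	        failurePolicy: Ignore            # "failing open": admit requests when the webhook is unreachable
	        clientConfig:
	          service:
	            name: gcp-auth
	            namespace: gcp-auth
	            path: /mutate
	        sideEffects: None
	        admissionReviewVersions: ["v1"]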
	
	* 
	* ==> kube-controller-manager [cd9915ab512769529c1f98a4accbb356503b011975213c918b3a38effc4f4763] <==
	* I1208 18:12:24.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="67.315µs"
	I1208 18:12:37.881629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="176.131µs"
	I1208 18:12:38.879401       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="6.567754ms"
	I1208 18:12:38.879492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="52.914µs"
	I1208 18:12:39.663578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="5.539141ms"
	I1208 18:12:39.663693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="65.835µs"
	I1208 18:12:42.013634       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1208 18:12:42.036453       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I1208 18:12:51.564566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.775554ms"
	I1208 18:12:51.564725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="63.438µs"
	I1208 18:12:52.008170       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1208 18:12:52.024600       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1208 18:13:07.511063       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1208 18:13:07.522876       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1208 18:13:07.523155       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1208 18:13:07.719067       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1208 18:13:07.719159       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1208 18:13:09.397949       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1208 18:13:12.743997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-5649c69bf6" duration="8.925µs"
	I1208 18:13:20.928412       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="11.966µs"
	I1208 18:13:21.223754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="6.608µs"
	I1208 18:13:22.603957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="8.834µs"
	I1208 18:13:25.487251       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1208 18:13:26.587643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="18.746µs"
	I1208 18:13:26.813548       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-proxy [2c809a9eebc0619d5c4cf67aad02b4761c191b35b4e62e0dd6e13b2c560e1946] <==
	* I1208 18:11:27.025246       1 server_others.go:69] "Using iptables proxy"
	I1208 18:11:27.245485       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1208 18:11:27.820291       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 18:11:27.823446       1 server_others.go:152] "Using iptables Proxier"
	I1208 18:11:27.823498       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1208 18:11:27.823509       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1208 18:11:27.823546       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 18:11:27.823763       1 server.go:846] "Version info" version="v1.28.4"
	I1208 18:11:27.823781       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 18:11:27.827532       1 config.go:188] "Starting service config controller"
	I1208 18:11:27.827707       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 18:11:27.827787       1 config.go:97] "Starting endpoint slice config controller"
	I1208 18:11:27.827822       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 18:11:27.828455       1 config.go:315] "Starting node config controller"
	I1208 18:11:27.832108       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 18:11:27.928562       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1208 18:11:27.928710       1 shared_informer.go:318] Caches are synced for service config
	I1208 18:11:27.933506       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6499d11f12dc1722f18acebd3bf10e4ea6d29fbe3fccb49a007b31275f13fb34] <==
	* E1208 18:11:07.439148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1208 18:11:07.438816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 18:11:07.439258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 18:11:07.438210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1208 18:11:07.439041       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1208 18:11:07.439287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1208 18:11:07.438571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1208 18:11:07.439315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1208 18:11:07.439379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1208 18:11:07.439397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1208 18:11:07.439474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:11:07.439485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:11:08.316137       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1208 18:11:08.316187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1208 18:11:08.339858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1208 18:11:08.339895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1208 18:11:08.406031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:11:08.406060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:11:08.433479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1208 18:11:08.433505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1208 18:11:08.477999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1208 18:11:08.478034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1208 18:11:08.604878       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1208 18:11:08.604925       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1208 18:11:10.934432       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 08 18:13:27 addons-766826 kubelet[1561]: I1208 18:13:27.569385    1561 memory_manager.go:346] "RemoveStaleState removing state" podUID="b8f1b3f2-b1f8-4fc3-a05d-64a5428c627c" containerName="helper-pod"
	Dec 08 18:13:27 addons-766826 kubelet[1561]: I1208 18:13:27.569392    1561 memory_manager.go:346] "RemoveStaleState removing state" podUID="3d046ecd-60ea-4e24-b441-8486e060b8f0" containerName="task-pv-container"
	Dec 08 18:13:27 addons-766826 kubelet[1561]: I1208 18:13:27.569399    1561 memory_manager.go:346] "RemoveStaleState removing state" podUID="831df691-d6e7-47e4-81c5-ec68788fcdb4" containerName="registry-proxy"
	Dec 08 18:13:27 addons-766826 kubelet[1561]: I1208 18:13:27.770139    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdqp8\" (UniqueName: \"kubernetes.io/projected/2f43e8f3-b864-47fb-9ed0-22d1c06b4980-kube-api-access-kdqp8\") pod \"nginx\" (UID: \"2f43e8f3-b864-47fb-9ed0-22d1c06b4980\") " pod="default/nginx"
	Dec 08 18:13:27 addons-766826 kubelet[1561]: I1208 18:13:27.770218    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2f43e8f3-b864-47fb-9ed0-22d1c06b4980-gcp-creds\") pod \"nginx\" (UID: \"2f43e8f3-b864-47fb-9ed0-22d1c06b4980\") " pod="default/nginx"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.019745    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns22c\" (UniqueName: \"kubernetes.io/projected/96be6ea9-f7ed-447e-96f0-2de2852c5689-kube-api-access-ns22c\") pod \"96be6ea9-f7ed-447e-96f0-2de2852c5689\" (UID: \"96be6ea9-f7ed-447e-96f0-2de2852c5689\") "
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.019823    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/96be6ea9-f7ed-447e-96f0-2de2852c5689-tmp-dir\") pod \"96be6ea9-f7ed-447e-96f0-2de2852c5689\" (UID: \"96be6ea9-f7ed-447e-96f0-2de2852c5689\") "
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.020217    1561 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96be6ea9-f7ed-447e-96f0-2de2852c5689-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "96be6ea9-f7ed-447e-96f0-2de2852c5689" (UID: "96be6ea9-f7ed-447e-96f0-2de2852c5689"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.022810    1561 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96be6ea9-f7ed-447e-96f0-2de2852c5689-kube-api-access-ns22c" (OuterVolumeSpecName: "kube-api-access-ns22c") pod "96be6ea9-f7ed-447e-96f0-2de2852c5689" (UID: "96be6ea9-f7ed-447e-96f0-2de2852c5689"). InnerVolumeSpecName "kube-api-access-ns22c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.035273    1561 topology_manager.go:215] "Topology Admit Handler" podUID="cb137d24-d069-403e-b70b-de65661121e5" podNamespace="default" podName="task-pv-pod-restore"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: E1208 18:13:28.035373    1561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96be6ea9-f7ed-447e-96f0-2de2852c5689" containerName="metrics-server"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.035422    1561 memory_manager.go:346] "RemoveStaleState removing state" podUID="96be6ea9-f7ed-447e-96f0-2de2852c5689" containerName="metrics-server"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.120264    1561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ns22c\" (UniqueName: \"kubernetes.io/projected/96be6ea9-f7ed-447e-96f0-2de2852c5689-kube-api-access-ns22c\") on node \"addons-766826\" DevicePath \"\""
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.120319    1561 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/96be6ea9-f7ed-447e-96f0-2de2852c5689-tmp-dir\") on node \"addons-766826\" DevicePath \"\""
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.141807    1561 scope.go:117] "RemoveContainer" containerID="8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.220644    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mqgv\" (UniqueName: \"kubernetes.io/projected/cb137d24-d069-403e-b70b-de65661121e5-kube-api-access-5mqgv\") pod \"task-pv-pod-restore\" (UID: \"cb137d24-d069-403e-b70b-de65661121e5\") " pod="default/task-pv-pod-restore"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.220697    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cb137d24-d069-403e-b70b-de65661121e5-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"cb137d24-d069-403e-b70b-de65661121e5\") " pod="default/task-pv-pod-restore"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.220742    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95f6c3de-0ea2-4812-b0cc-e7f8df9f022f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7b98ca49-95f5-11ee-8a54-4ac14576b5fe\") pod \"task-pv-pod-restore\" (UID: \"cb137d24-d069-403e-b70b-de65661121e5\") " pod="default/task-pv-pod-restore"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.221897    1561 scope.go:117] "RemoveContainer" containerID="8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: E1208 18:13:28.222439    1561 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049\": container with ID starting with 8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049 not found: ID does not exist" containerID="8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.222533    1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049"} err="failed to get container status \"8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049\": rpc error: code = NotFound desc = could not find container \"8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049\": container with ID starting with 8cbdeb9e912033d2617b4e7c4ce6df8ee6b27c991565512bf2eaa7ffaac13049 not found: ID does not exist"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.238682    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="96be6ea9-f7ed-447e-96f0-2de2852c5689" path="/var/lib/kubelet/pods/96be6ea9-f7ed-447e-96f0-2de2852c5689/volumes"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: W1208 18:13:28.280197    1561 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/crio-3d1f216fb6c5b0fe4729be525858fac92f089f9469f8aa23d77c3d119b2451a5 WatchSource:0}: Error finding container 3d1f216fb6c5b0fe4729be525858fac92f089f9469f8aa23d77c3d119b2451a5: Status 404 returned error can't find the container with id 3d1f216fb6c5b0fe4729be525858fac92f089f9469f8aa23d77c3d119b2451a5
	Dec 08 18:13:28 addons-766826 kubelet[1561]: I1208 18:13:28.326536    1561 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-95f6c3de-0ea2-4812-b0cc-e7f8df9f022f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7b98ca49-95f5-11ee-8a54-4ac14576b5fe\") pod \"task-pv-pod-restore\" (UID: \"cb137d24-d069-403e-b70b-de65661121e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/3c60e19d071149312149af3121703c93dd503ee3909325697bd19eefef527bc8/globalmount\"" pod="default/task-pv-pod-restore"
	Dec 08 18:13:28 addons-766826 kubelet[1561]: W1208 18:13:28.683369    1561 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/543daae92b3e6289e60b8e9b6a99ea708991667ce179ea56b5338acef735a788/crio-5ba190589b0b951869565b5b3cc24f49568d3919d3efc4dd173746209b6e13b6 WatchSource:0}: Error finding container 5ba190589b0b951869565b5b3cc24f49568d3919d3efc4dd173746209b6e13b6: Status 404 returned error can't find the container with id 5ba190589b0b951869565b5b3cc24f49568d3919d3efc4dd173746209b6e13b6
	
	* 
	* ==> storage-provisioner [0afc54229499cded21f7a7ff6d8237ce979555642822ab486ebb66c4fa43311a] <==
	* I1208 18:11:58.620374       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 18:11:58.630389       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 18:11:58.630475       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1208 18:11:58.636266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 18:11:58.636366       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a667e5e3-74ad-4bb9-9c4d-78582618c974", APIVersion:"v1", ResourceVersion:"882", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-766826_1d54b840-b3b3-4bdf-bbce-c1d4e718f206 became leader
	I1208 18:11:58.636411       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-766826_1d54b840-b3b3-4bdf-bbce-c1d4e718f206!
	I1208 18:11:58.737200       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-766826_1d54b840-b3b3-4bdf-bbce-c1d4e718f206!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-766826 -n addons-766826
helpers_test.go:261: (dbg) Run:  kubectl --context addons-766826 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod-restore ingress-nginx-admission-create-4dfwg ingress-nginx-admission-patch-gh9hw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-766826 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-4dfwg ingress-nginx-admission-patch-gh9hw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-766826 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-4dfwg ingress-nginx-admission-patch-gh9hw: exit status 1 (116.924647ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-766826/192.168.49.2
	Start Time:       Fri, 08 Dec 2023 18:13:27 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kdqp8 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-kdqp8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/nginx to addons-766826
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-766826/192.168.49.2
	Start Time:       Fri, 08 Dec 2023 18:13:28 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5mqgv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-5mqgv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-766826
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4dfwg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gh9hw" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-766826 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-4dfwg ingress-nginx-admission-patch-gh9hw: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (7.89s)
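
The describe step exits 1 by design here: two of the named pods (the admission jobs) had already been cleaned up, and kubectl describe fails on any name it cannot find. A minimal sketch of a NotFound-tolerant variant for manual post-mortems, assuming the same context name from this run (this is not how helpers_test.go does it):

	# Resolve the names first with --ignore-not-found (supported by get, not describe),
	# then describe only the pods that still exist.
	pods=$(kubectl --context addons-766826 get pod nginx task-pv-pod-restore \
	  ingress-nginx-admission-create-4dfwg ingress-nginx-admission-patch-gh9hw \
	  -o name --ignore-not-found)
	[ -n "$pods" ] && kubectl --context addons-766826 describe $pods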

TestIngressAddonLegacy/serial/ValidateIngressAddons (177.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-722179 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-722179 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.044377393s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-722179 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-722179 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ba68bc57-b088-402d-b3e6-fe4a7eceba78] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ba68bc57-b088-402d-b3e6-fe4a7eceba78] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.007713903s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1208 18:23:06.943661  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:23:34.630579  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-722179 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.562309776s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
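
The exit status 28 above is curl's timeout code propagated through minikube ssh, so the command ran on the node but nothing answered on port 80 within the test's deadline. A minimal sketch of reproducing the step by hand, assuming the profile from this run is still up (the explicit --max-time is an illustrative addition, not part of the test):

	# Re-run the in-node curl with a bounded timeout and the Host header the Ingress routes on.
	out/minikube-linux-amd64 -p ingress-addon-legacy-722179 ssh \
	  "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"

	# If that still times out, check whether anything is listening on port 80 at all:
	out/minikube-linux-amd64 -p ingress-addon-legacy-722179 ssh \
	  "curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 http://127.0.0.1/"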
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-722179 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.01124012s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
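
Since the lookup timed out rather than returning NXDOMAIN, the likely failure is that nothing answered on UDP/53 at the node IP. A minimal sketch of narrowing this down by hand, assuming the node IP reported above and that the cluster is still running (the timeout flags just bound the wait):

	# Query the ingress-dns resolver on the node IP directly, with short timeouts.
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test A

	# Confirm the ingress-dns pod is actually Running in kube-system:
	kubectl --context ingress-addon-legacy-722179 get pods -n kube-system | grep -i ingress-dns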
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons disable ingress-dns --alsologtostderr -v=1: (2.928128324s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons disable ingress --alsologtostderr -v=1
E1208 18:24:47.253504  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.258777  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.269029  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.289288  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.329552  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.409979  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.570403  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:47.891014  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:48.531675  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:24:49.812155  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons disable ingress --alsologtostderr -v=1: (7.400683467s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-722179
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-722179:

-- stdout --
	[
	    {
	        "Id": "c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c",
	        "Created": "2023-12-08T18:20:49.356512726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385076,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-08T18:20:49.633910713Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7e83e141d5f1084600bb5c7d15c9e2fd69083458051c2cf9d21dfd6243a0ff9b",
	        "ResolvConfPath": "/var/lib/docker/containers/c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c/hosts",
	        "LogPath": "/var/lib/docker/containers/c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c/c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c-json.log",
	        "Name": "/ingress-addon-legacy-722179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-722179:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-722179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc2ae4ed4f797047c44edc6f3007384369ea014f0675ef7494c8d472548ff771-init/diff:/var/lib/docker/overlay2/f01fd4b86350391aeb4ddce306a73284c32c8168179c226f9bf8857f27cbe54b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc2ae4ed4f797047c44edc6f3007384369ea014f0675ef7494c8d472548ff771/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc2ae4ed4f797047c44edc6f3007384369ea014f0675ef7494c8d472548ff771/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc2ae4ed4f797047c44edc6f3007384369ea014f0675ef7494c8d472548ff771/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-722179",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-722179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-722179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-722179",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-722179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f22b4b5afb615fc32ee3cd6f1a5747c2a13582110e70965b694c7db46ad9009",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3f22b4b5afb6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-722179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c5d49c9b500c",
	                        "ingress-addon-legacy-722179"
	                    ],
	                    "NetworkID": "c1240a66bc126f3d3c950077abe1fa44df224cc0dc1aee8d45550679276c4b25",
	                    "EndpointID": "e5f2a83d46d807d009583681ea28193639c3d4bb98e13e85227cc9881f7dbc0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-722179 -n ingress-addon-legacy-722179
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-722179 logs -n 25: (1.07007392s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-290514                                                   | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-290514 ssh findmnt                                          | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-290514                                                   | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-290514 ssh findmnt                                          | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-290514 ssh findmnt                                          | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-290514 ssh findmnt                                          | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-290514                                                   | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-290514 ssh pgrep                                            | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-290514 image build -t                                       | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | localhost/my-image:functional-290514                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-290514 image ls                                             | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	| image          | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-290514                                                      | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| delete         | -p functional-290514                                                   | functional-290514           | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:20 UTC |
	| start          | -p ingress-addon-legacy-722179                                         | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:20 UTC | 08 Dec 23 18:21 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-722179                                            | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:21 UTC | 08 Dec 23 18:21 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-722179                                            | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:21 UTC | 08 Dec 23 18:21 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-722179                                            | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:22 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-722179 ip                                         | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:24 UTC | 08 Dec 23 18:24 UTC |
	| addons         | ingress-addon-legacy-722179                                            | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:24 UTC | 08 Dec 23 18:24 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-722179                                            | ingress-addon-legacy-722179 | jenkins | v1.32.0 | 08 Dec 23 18:24 UTC | 08 Dec 23 18:24 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:20:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:20:31.954389  384440 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:20:31.954551  384440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:20:31.954560  384440 out.go:309] Setting ErrFile to fd 2...
	I1208 18:20:31.954565  384440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:20:31.954780  384440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:20:31.955386  384440 out.go:303] Setting JSON to false
	I1208 18:20:31.956795  384440 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7332,"bootTime":1702052300,"procs":551,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:20:31.956867  384440 start.go:138] virtualization: kvm guest
	I1208 18:20:31.959460  384440 out.go:177] * [ingress-addon-legacy-722179] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:20:31.961153  384440 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:20:31.961176  384440 notify.go:220] Checking for updates...
	I1208 18:20:31.962641  384440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:20:31.964157  384440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:20:31.965737  384440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:20:31.967082  384440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:20:31.968514  384440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:20:31.970044  384440 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:20:31.992754  384440 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:20:31.992852  384440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:20:32.043308  384440 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-08 18:20:32.034636976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:20:32.043450  384440 docker.go:295] overlay module found
	I1208 18:20:32.046247  384440 out.go:177] * Using the docker driver based on user configuration
	I1208 18:20:32.047724  384440 start.go:298] selected driver: docker
	I1208 18:20:32.047743  384440 start.go:902] validating driver "docker" against <nil>
	I1208 18:20:32.047754  384440 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:20:32.048604  384440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:20:32.101029  384440 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-08 18:20:32.092701531 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:20:32.101191  384440 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 18:20:32.101450  384440 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 18:20:32.103480  384440 out.go:177] * Using Docker driver with root privileges
	I1208 18:20:32.105062  384440 cni.go:84] Creating CNI manager for ""
	I1208 18:20:32.105084  384440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:20:32.105098  384440 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 18:20:32.105122  384440 start_flags.go:323] config:
	{Name:ingress-addon-legacy-722179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-722179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:20:32.106746  384440 out.go:177] * Starting control plane node ingress-addon-legacy-722179 in cluster ingress-addon-legacy-722179
	I1208 18:20:32.108268  384440 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:20:32.109711  384440 out.go:177] * Pulling base image ...
	I1208 18:20:32.111075  384440 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1208 18:20:32.111110  384440 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:20:32.126815  384440 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon, skipping pull
	I1208 18:20:32.126854  384440 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in daemon, skipping load
	I1208 18:20:32.213914  384440 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1208 18:20:32.213949  384440 cache.go:56] Caching tarball of preloaded images
	I1208 18:20:32.214162  384440 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1208 18:20:32.216251  384440 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1208 18:20:32.217680  384440 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:20:32.256265  384440 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1208 18:20:41.070242  384440 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:20:41.070378  384440 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:20:42.091428  384440 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1208 18:20:42.091803  384440 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/config.json ...
	I1208 18:20:42.091837  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/config.json: {Name:mk675c2a8ffcf604b00096d5e68bb09566d9892d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:42.092019  384440 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:20:42.092045  384440 start.go:365] acquiring machines lock for ingress-addon-legacy-722179: {Name:mke4e2c9f1190a2ae369f04ff269e68801370d58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:20:42.092088  384440 start.go:369] acquired machines lock for "ingress-addon-legacy-722179" in 30.847µs
	I1208 18:20:42.092106  384440 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-722179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-722179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:20:42.092183  384440 start.go:125] createHost starting for "" (driver="docker")
	I1208 18:20:42.094683  384440 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1208 18:20:42.094897  384440 start.go:159] libmachine.API.Create for "ingress-addon-legacy-722179" (driver="docker")
	I1208 18:20:42.094927  384440 client.go:168] LocalClient.Create starting
	I1208 18:20:42.095003  384440 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem
	I1208 18:20:42.095033  384440 main.go:141] libmachine: Decoding PEM data...
	I1208 18:20:42.095050  384440 main.go:141] libmachine: Parsing certificate...
	I1208 18:20:42.095100  384440 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem
	I1208 18:20:42.095120  384440 main.go:141] libmachine: Decoding PEM data...
	I1208 18:20:42.095131  384440 main.go:141] libmachine: Parsing certificate...
	I1208 18:20:42.095444  384440 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-722179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 18:20:42.111420  384440 cli_runner.go:211] docker network inspect ingress-addon-legacy-722179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 18:20:42.111503  384440 network_create.go:281] running [docker network inspect ingress-addon-legacy-722179] to gather additional debugging logs...
	I1208 18:20:42.111525  384440 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-722179
	W1208 18:20:42.126880  384440 cli_runner.go:211] docker network inspect ingress-addon-legacy-722179 returned with exit code 1
	I1208 18:20:42.126925  384440 network_create.go:284] error running [docker network inspect ingress-addon-legacy-722179]: docker network inspect ingress-addon-legacy-722179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-722179 not found
	I1208 18:20:42.126939  384440 network_create.go:286] output of [docker network inspect ingress-addon-legacy-722179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-722179 not found
	
	** /stderr **
	I1208 18:20:42.127051  384440 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:20:42.142258  384440 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00237ab50}
	I1208 18:20:42.142373  384440 network_create.go:124] attempt to create docker network ingress-addon-legacy-722179 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 18:20:42.142468  384440 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-722179 ingress-addon-legacy-722179
	I1208 18:20:42.192700  384440 network_create.go:108] docker network ingress-addon-legacy-722179 192.168.49.0/24 created
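
The sequence above probes for an existing network, picks a free private subnet, and creates a labelled bridge network. A minimal Go sketch of the same docker CLI invocation (the helper name and error handling are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork shells out to the docker CLI the way the log does:
// a bridge network with a fixed subnet, gateway, MTU of 1500, and
// minikube-style labels.
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create failed: %v: %s", err, out)
	}
	return nil
}
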
	I1208 18:20:42.192736  384440 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-722179" container
	I1208 18:20:42.192794  384440 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 18:20:42.209841  384440 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-722179 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-722179 --label created_by.minikube.sigs.k8s.io=true
	I1208 18:20:42.226435  384440 oci.go:103] Successfully created a docker volume ingress-addon-legacy-722179
	I1208 18:20:42.226528  384440 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-722179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-722179 --entrypoint /usr/bin/test -v ingress-addon-legacy-722179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib
	I1208 18:20:43.996497  384440 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-722179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-722179 --entrypoint /usr/bin/test -v ingress-addon-legacy-722179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib: (1.769925943s)
	I1208 18:20:43.996532  384440 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-722179
	I1208 18:20:43.996552  384440 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1208 18:20:43.996582  384440 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 18:20:43.996647  384440 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-722179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 18:20:49.291003  384440 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-722179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.294313435s)
	I1208 18:20:49.291041  384440 kic.go:203] duration metric: took 5.294462 seconds to extract preloaded images to volume
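
The preload step mounts the tarball read-only into a throwaway container and untars it into the named volume, timing the operation for the "duration metric" line. A sketch of that pattern, with illustrative parameter names:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload mirrors the docker-run-tar pattern above: mount the
// tarball read-only, mount the named volume at /extractDir, and untar
// with lz4 inside a throwaway container.
func extractPreload(tarball, volume, image string) (time.Duration, error) {
	start := time.Now()
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return time.Since(start), nil // the duration the log reports
}
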
	W1208 18:20:49.291180  384440 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 18:20:49.291265  384440 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 18:20:49.342078  384440 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-722179 --name ingress-addon-legacy-722179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-722179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-722179 --network ingress-addon-legacy-722179 --ip 192.168.49.2 --volume ingress-addon-legacy-722179:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:20:49.641875  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Running}}
	I1208 18:20:49.659889  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Status}}
	I1208 18:20:49.678659  384440 cli_runner.go:164] Run: docker exec ingress-addon-legacy-722179 stat /var/lib/dpkg/alternatives/iptables
	I1208 18:20:49.738256  384440 oci.go:144] the created container "ingress-addon-legacy-722179" has a running status.
	I1208 18:20:49.738290  384440 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa...
	I1208 18:20:49.793807  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1208 18:20:49.793862  384440 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 18:20:49.813574  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Status}}
	I1208 18:20:49.831166  384440 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 18:20:49.831189  384440 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-722179 chown docker:docker /home/docker/.ssh/authorized_keys]
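
Key creation above writes an id_rsa/id_rsa.pub pair and pushes the public half into the container's authorized_keys. A hedged sketch of generating such a pair in Go (minikube's own generator may differ in key size and encoding):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeKeyPair generates an RSA key and writes the private key plus an
// authorized_keys-format public key, roughly what "Creating ssh key for
// kic" does before the pub key is copied into the container.
func writeKeyPair(privPath, pubPath string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(privPath, privPEM, 0600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(pubPath, ssh.MarshalAuthorizedKey(pub), 0644)
}
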
	I1208 18:20:49.908891  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Status}}
	I1208 18:20:49.926593  384440 machine.go:88] provisioning docker machine ...
	I1208 18:20:49.926640  384440 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-722179"
	I1208 18:20:49.926708  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:49.947706  384440 main.go:141] libmachine: Using SSH client type: native
	I1208 18:20:49.948307  384440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1208 18:20:49.948341  384440 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-722179 && echo "ingress-addon-legacy-722179" | sudo tee /etc/hostname
	I1208 18:20:49.949067  384440 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47586->127.0.0.1:33089: read: connection reset by peer
	I1208 18:20:53.080522  384440 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-722179
	
	I1208 18:20:53.080598  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:53.096860  384440 main.go:141] libmachine: Using SSH client type: native
	I1208 18:20:53.097207  384440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1208 18:20:53.097229  384440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-722179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-722179/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-722179' | sudo tee -a /etc/hosts; 
				fi
			fi
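
Provisioning runs shell snippets like the hostname and /etc/hosts edits above over the forwarded SSH port. A minimal sketch of one remote command using golang.org/x/crypto/ssh; the user, key path, and insecure host-key callback are assumptions for illustration, not minikube's actual client configuration:

package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote dials the forwarded SSH port (e.g. 127.0.0.1:33089 in this
// log) and runs one command, the way the provisioner sets the hostname.
func runRemote(addr, keyPath, cmd string) ([]byte, error) {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return nil, err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local container
	})
	if err != nil {
		return nil, err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}
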
	I1208 18:20:53.218833  384440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:20:53.218866  384440 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:20:53.218893  384440 ubuntu.go:177] setting up certificates
	I1208 18:20:53.218921  384440 provision.go:83] configureAuth start
	I1208 18:20:53.218991  384440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-722179
	I1208 18:20:53.235255  384440 provision.go:138] copyHostCerts
	I1208 18:20:53.235304  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:20:53.235335  384440 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem, removing ...
	I1208 18:20:53.235344  384440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:20:53.235406  384440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:20:53.235481  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:20:53.235499  384440 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem, removing ...
	I1208 18:20:53.235505  384440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:20:53.235536  384440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:20:53.235579  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:20:53.235597  384440 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem, removing ...
	I1208 18:20:53.235603  384440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:20:53.235623  384440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:20:53.235666  384440 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-722179 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-722179]
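
configureAuth generates a CA-signed server certificate whose SANs carry the IPs and names from the san=[...] list above. A sketch with Go's crypto/x509; the CA certificate and keys are assumed to be parsed elsewhere, and the subject/expiry values are taken from the log's config:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCert signs a server certificate with the given CA, putting the
// IPs and hostnames from the san=[...] list into the SAN extension.
func serverCert(ca *x509.Certificate, caKey, key *rsa.PrivateKey) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-722179"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-722179"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}
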
	I1208 18:20:53.376891  384440 provision.go:172] copyRemoteCerts
	I1208 18:20:53.376952  384440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:20:53.376992  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:53.394266  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:20:53.483085  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 18:20:53.483163  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:20:53.504342  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 18:20:53.504405  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1208 18:20:53.525270  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 18:20:53.525324  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 18:20:53.545791  384440 provision.go:86] duration metric: configureAuth took 326.849032ms
	I1208 18:20:53.545824  384440 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:20:53.546003  384440 config.go:182] Loaded profile config "ingress-addon-legacy-722179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1208 18:20:53.546138  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:53.562141  384440 main.go:141] libmachine: Using SSH client type: native
	I1208 18:20:53.562601  384440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1208 18:20:53.562629  384440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:20:53.792524  384440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:20:53.792549  384440 machine.go:91] provisioned docker machine in 3.865926297s
	I1208 18:20:53.792572  384440 client.go:171] LocalClient.Create took 11.697627349s
	I1208 18:20:53.792590  384440 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-722179" took 11.697694178s
	I1208 18:20:53.792605  384440 start.go:300] post-start starting for "ingress-addon-legacy-722179" (driver="docker")
	I1208 18:20:53.792617  384440 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:20:53.792676  384440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:20:53.792715  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:53.809556  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:20:53.899209  384440 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:20:53.902398  384440 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:20:53.902436  384440 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:20:53.902444  384440 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:20:53.902471  384440 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1208 18:20:53.902486  384440 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:20:53.902539  384440 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:20:53.902619  384440 filesync.go:149] local asset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> 3436282.pem in /etc/ssl/certs
	I1208 18:20:53.902632  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> /etc/ssl/certs/3436282.pem
	I1208 18:20:53.902725  384440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 18:20:53.910440  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:20:53.932519  384440 start.go:303] post-start completed in 139.896186ms
	I1208 18:20:53.932840  384440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-722179
	I1208 18:20:53.948900  384440 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/config.json ...
	I1208 18:20:53.949165  384440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:20:53.949211  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:53.965604  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:20:54.051217  384440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:20:54.055322  384440 start.go:128] duration metric: createHost completed in 11.963124482s
	I1208 18:20:54.055346  384440 start.go:83] releasing machines lock for "ingress-addon-legacy-722179", held for 11.96324764s
	I1208 18:20:54.055410  384440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-722179
	I1208 18:20:54.071638  384440 ssh_runner.go:195] Run: cat /version.json
	I1208 18:20:54.071690  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:54.071746  384440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:20:54.071817  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:20:54.088882  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:20:54.089514  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:20:54.176168  384440 ssh_runner.go:195] Run: systemctl --version
	I1208 18:20:54.264228  384440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:20:54.402680  384440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:20:54.406922  384440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:20:54.424182  384440 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:20:54.424275  384440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:20:54.450361  384440 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1208 18:20:54.450392  384440 start.go:475] detecting cgroup driver to use...
	I1208 18:20:54.450425  384440 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:20:54.450491  384440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:20:54.463633  384440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:20:54.473250  384440 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:20:54.473322  384440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:20:54.485932  384440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:20:54.498370  384440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 18:20:54.580693  384440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:20:54.664153  384440 docker.go:219] disabling docker service ...
	I1208 18:20:54.664214  384440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:20:54.681440  384440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:20:54.691944  384440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:20:54.767621  384440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:20:54.844527  384440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 18:20:54.854484  384440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:20:54.868478  384440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1208 18:20:54.868532  384440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:20:54.877027  384440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 18:20:54.877087  384440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:20:54.885837  384440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:20:54.894255  384440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:20:54.902392  384440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 18:20:54.909959  384440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 18:20:54.917210  384440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 18:20:54.924995  384440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 18:20:54.998574  384440 ssh_runner.go:195] Run: sudo systemctl restart crio
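
The CRI-O drop-in edits above are plain sed rewrites of "key = value" lines followed by a daemon-reload and restart. The same edit expressed in Go, with an illustrative helper name:

package main

import (
	"os"
	"regexp"
)

// setTOMLValue is the sed one-liner from the log as Go: rewrite any
// existing "key = ..." line in the CRI-O drop-in to the wanted value.
func setTOMLValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	data = re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, data, 0644)
}

// e.g. setTOMLValue("/etc/crio/crio.conf.d/02-crio.conf",
//                   "pause_image", "registry.k8s.io/pause:3.2")
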
	I1208 18:20:55.101017  384440 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 18:20:55.101073  384440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 18:20:55.104502  384440 start.go:543] Will wait 60s for crictl version
	I1208 18:20:55.104546  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:55.107488  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 18:20:55.139038  384440 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
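
The two "Will wait 60s" lines are simple polls: stat the socket (or look up crictl) until it appears or the deadline passes. A sketch of such a wait loop; the 500ms interval is an assumption, not taken from the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket path until it exists or the
// timeout elapses, like the wait on /var/run/crio/crio.sock above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}
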
	I1208 18:20:55.139103  384440 ssh_runner.go:195] Run: crio --version
	I1208 18:20:55.171844  384440 ssh_runner.go:195] Run: crio --version
	I1208 18:20:55.206234  384440 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1208 18:20:55.207703  384440 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-722179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:20:55.223981  384440 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 18:20:55.227565  384440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
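
The hosts-file update above is an idempotent rewrite: drop any existing line ending in the hostname, append the fresh IP-to-name mapping, and copy the result back over /etc/hosts. Roughly the same logic in Go:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry reproduces the grep -v / echo / cp pipeline: strip
// any stale line for the host, then append the fresh mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}
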
	I1208 18:20:55.237630  384440 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1208 18:20:55.237683  384440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:20:55.281089  384440 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1208 18:20:55.281147  384440 ssh_runner.go:195] Run: which lz4
	I1208 18:20:55.284376  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1208 18:20:55.284470  384440 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1208 18:20:55.287349  384440 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 18:20:55.287373  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1208 18:20:56.220926  384440 crio.go:444] Took 0.936482 seconds to copy over tarball
	I1208 18:20:56.221017  384440 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 18:20:58.426441  384440 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.205385484s)
	I1208 18:20:58.426490  384440 crio.go:451] Took 2.205529 seconds to extract the tarball
	I1208 18:20:58.426502  384440 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1208 18:20:58.498720  384440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:20:58.529893  384440 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1208 18:20:58.529918  384440 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1208 18:20:58.529995  384440 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:20:58.530020  384440 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1208 18:20:58.530041  384440 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1208 18:20:58.530073  384440 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1208 18:20:58.529996  384440 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1208 18:20:58.530020  384440 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1208 18:20:58.530020  384440 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1208 18:20:58.530044  384440 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1208 18:20:58.531997  384440 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1208 18:20:58.532006  384440 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1208 18:20:58.532021  384440 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1208 18:20:58.532054  384440 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:20:58.532080  384440 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1208 18:20:58.532079  384440 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1208 18:20:58.532124  384440 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1208 18:20:58.532461  384440 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1208 18:20:58.704246  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1208 18:20:58.712730  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1208 18:20:58.718329  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1208 18:20:58.740701  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:20:58.740785  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1208 18:20:58.741567  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1208 18:20:58.747782  384440 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1208 18:20:58.747826  384440 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1208 18:20:58.747864  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.757628  384440 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1208 18:20:58.757677  384440 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1208 18:20:58.757718  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.761624  384440 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1208 18:20:58.761666  384440 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1208 18:20:58.761705  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.772730  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1208 18:20:58.785947  384440 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1208 18:20:58.846191  384440 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1208 18:20:58.846245  384440 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1208 18:20:58.846290  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.945496  384440 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1208 18:20:58.945541  384440 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1208 18:20:58.945574  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.945580  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1208 18:20:58.945645  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1208 18:20:58.945680  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1208 18:20:58.945727  384440 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1208 18:20:58.945754  384440 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1208 18:20:58.945760  384440 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1208 18:20:58.945784  384440 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1208 18:20:58.945789  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.945825  384440 ssh_runner.go:195] Run: which crictl
	I1208 18:20:58.945789  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1208 18:20:59.021596  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1208 18:20:59.026652  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1208 18:20:59.026717  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1208 18:20:59.026737  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1208 18:20:59.026783  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1208 18:20:59.026844  384440 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1208 18:20:59.026858  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1208 18:20:59.062056  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1208 18:20:59.063357  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1208 18:20:59.063413  384440 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1208 18:20:59.063457  384440 cache_images.go:92] LoadImages completed in 533.527353ms
	W1208 18:20:59.063523  384440 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
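
LoadImages decides which images "need transfer" by listing what the runtime already holds via `sudo crictl images --output json` and diffing against the wanted list. A sketch of that check; the JSON shape (an "images" array carrying "repoTags") is the usual crictl output format, assumed here rather than shown in the log:

package main

import "encoding/json"

// imageList matches the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the wanted tags that the runtime does not have.
func missingImages(crictlJSON []byte, wanted []string) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, w := range wanted {
		if !have[w] {
			missing = append(missing, w)
		}
	}
	return missing, nil
}
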
	I1208 18:20:59.063584  384440 ssh_runner.go:195] Run: crio config
	I1208 18:20:59.136789  384440 cni.go:84] Creating CNI manager for ""
	I1208 18:20:59.136814  384440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:20:59.136844  384440 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1208 18:20:59.136871  384440 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-722179 NodeName:ingress-addon-legacy-722179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1208 18:20:59.137046  384440 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-722179"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
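
The kubeadm config dumped above is rendered from the options struct logged at kubeadm.go:176. A toy text/template rendering of just the first stanza, to show the mechanism; the template and field names here are illustrative, not minikube's actual template:

package main

import (
	"strings"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
`

// renderInitConfig fills the InitConfiguration stanza from a struct,
// the way the full config is produced from the kubeadm options.
func renderInitConfig(addr string, port int) (string, error) {
	var b strings.Builder
	t := template.Must(template.New("init").Parse(initTmpl))
	err := t.Execute(&b, struct {
		AdvertiseAddress string
		BindPort         int
	}{addr, port})
	return b.String(), err
}
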
	
	I1208 18:20:59.137139  384440 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-722179 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-722179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1208 18:20:59.137209  384440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1208 18:20:59.145285  384440 binaries.go:44] Found k8s binaries, skipping transfer
	I1208 18:20:59.145382  384440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 18:20:59.152772  384440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1208 18:20:59.167614  384440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1208 18:20:59.183587  384440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1208 18:20:59.198317  384440 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 18:20:59.201290  384440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 18:20:59.210514  384440 certs.go:56] Setting up /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179 for IP: 192.168.49.2
	I1208 18:20:59.210575  384440 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5abf3d3db90d2494e2d623a52fec5b2843f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.210738  384440 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key
	I1208 18:20:59.210789  384440 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key
	I1208 18:20:59.210866  384440 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.key
	I1208 18:20:59.210883  384440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt with IP's: []
	I1208 18:20:59.420493  384440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt ...
	I1208 18:20:59.420537  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: {Name:mkf7325f169ae40090758a2abc756708c24b1b55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.420755  384440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.key ...
	I1208 18:20:59.420779  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.key: {Name:mk1255e9b695577fc021dad93930ef43a6545b7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.420904  384440 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key.dd3b5fb2
	I1208 18:20:59.420928  384440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1208 18:20:59.605309  384440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt.dd3b5fb2 ...
	I1208 18:20:59.605345  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt.dd3b5fb2: {Name:mkfe47e30dcb5906636d06caaaa55e86dd32f611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.605538  384440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key.dd3b5fb2 ...
	I1208 18:20:59.605562  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key.dd3b5fb2: {Name:mk40d960a85636af0acf492a9c1bfcbbca7846f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.605675  384440 certs.go:337] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt
	I1208 18:20:59.605778  384440 certs.go:341] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key
	I1208 18:20:59.605863  384440 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.key
	I1208 18:20:59.605883  384440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.crt with IP's: []
	I1208 18:20:59.750813  384440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.crt ...
	I1208 18:20:59.750853  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.crt: {Name:mk1ac9ecf8436e1b7d7e23b416942de4efa12e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.751056  384440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.key ...
	I1208 18:20:59.751074  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.key: {Name:mkf0b75f166e1128395fbf728ad0787b330a7e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:20:59.751170  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 18:20:59.751199  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 18:20:59.751214  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 18:20:59.751231  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 18:20:59.751246  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 18:20:59.751265  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 18:20:59.751283  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 18:20:59.751302  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 18:20:59.751371  384440 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem (1338 bytes)
	W1208 18:20:59.751425  384440 certs.go:433] ignoring /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628_empty.pem, impossibly tiny 0 bytes
	I1208 18:20:59.751441  384440 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 18:20:59.751477  384440 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem (1082 bytes)
	I1208 18:20:59.751522  384440 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem (1123 bytes)
	I1208 18:20:59.751565  384440 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem (1679 bytes)
	I1208 18:20:59.751625  384440 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:20:59.751675  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:20:59.751703  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem -> /usr/share/ca-certificates/343628.pem
	I1208 18:20:59.751721  384440 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> /usr/share/ca-certificates/3436282.pem
	I1208 18:20:59.752334  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1208 18:20:59.773813  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 18:20:59.793966  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 18:20:59.815014  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 18:20:59.835601  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 18:20:59.856924  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 18:20:59.878595  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 18:20:59.899435  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 18:20:59.920550  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 18:20:59.941766  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem --> /usr/share/ca-certificates/343628.pem (1338 bytes)
	I1208 18:20:59.963257  384440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /usr/share/ca-certificates/3436282.pem (1708 bytes)
	I1208 18:20:59.984552  384440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 18:21:00.000075  384440 ssh_runner.go:195] Run: openssl version
	I1208 18:21:00.005054  384440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1208 18:21:00.013426  384440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:21:00.016550  384440 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:21:00.016604  384440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:21:00.022748  384440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1208 18:21:00.031059  384440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/343628.pem && ln -fs /usr/share/ca-certificates/343628.pem /etc/ssl/certs/343628.pem"
	I1208 18:21:00.039305  384440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/343628.pem
	I1208 18:21:00.042352  384440 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  8 18:17 /usr/share/ca-certificates/343628.pem
	I1208 18:21:00.042418  384440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/343628.pem
	I1208 18:21:00.048559  384440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/343628.pem /etc/ssl/certs/51391683.0"
	I1208 18:21:00.056674  384440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3436282.pem && ln -fs /usr/share/ca-certificates/3436282.pem /etc/ssl/certs/3436282.pem"
	I1208 18:21:00.065212  384440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3436282.pem
	I1208 18:21:00.068207  384440 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  8 18:17 /usr/share/ca-certificates/3436282.pem
	I1208 18:21:00.068261  384440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3436282.pem
	I1208 18:21:00.074601  384440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3436282.pem /etc/ssl/certs/3ec20f2e.0"
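
The block above follows the standard OpenSSL trust-store convention: each CA PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as `<subject-hash>.0` (e.g. b5213941.0 for minikubeCA.pem), so TLS clients using the default verify paths can find it. A minimal Go sketch of the same pattern — not minikube's actual code, and assuming `openssl` is on PATH:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
// subject-name hash, mirroring the `openssl x509 -hash` + `ln -fs` steps above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
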
	I1208 18:21:00.082658  384440 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1208 18:21:00.085426  384440 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:21:00.085477  384440 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-722179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-722179 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:21:00.085575  384440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 18:21:00.085635  384440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 18:21:00.118706  384440 cri.go:89] found id: ""
	I1208 18:21:00.118772  384440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 18:21:00.127476  384440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 18:21:00.135551  384440 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1208 18:21:00.135599  384440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 18:21:00.143659  384440 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 18:21:00.143701  384440 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 18:21:00.186021  384440 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1208 18:21:00.186094  384440 kubeadm.go:322] [preflight] Running pre-flight checks
	I1208 18:21:00.223529  384440 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1208 18:21:00.223594  384440 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1208 18:21:00.223624  384440 kubeadm.go:322] OS: Linux
	I1208 18:21:00.223675  384440 kubeadm.go:322] CGROUPS_CPU: enabled
	I1208 18:21:00.223751  384440 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1208 18:21:00.223839  384440 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1208 18:21:00.223917  384440 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1208 18:21:00.223986  384440 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1208 18:21:00.224054  384440 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1208 18:21:00.289761  384440 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 18:21:00.289900  384440 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 18:21:00.290009  384440 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1208 18:21:00.468499  384440 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 18:21:00.469432  384440 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 18:21:00.469496  384440 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1208 18:21:00.543171  384440 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 18:21:00.546209  384440 out.go:204]   - Generating certificates and keys ...
	I1208 18:21:00.546333  384440 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1208 18:21:00.546443  384440 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1208 18:21:00.801118  384440 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 18:21:00.951836  384440 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1208 18:21:01.260537  384440 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1208 18:21:01.341062  384440 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1208 18:21:01.414192  384440 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1208 18:21:01.414386  384440 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-722179 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 18:21:01.544096  384440 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1208 18:21:01.544269  384440 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-722179 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 18:21:01.629739  384440 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 18:21:01.789488  384440 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 18:21:01.988227  384440 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1208 18:21:01.988363  384440 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 18:21:02.152466  384440 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 18:21:02.313392  384440 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 18:21:02.416832  384440 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 18:21:02.738132  384440 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 18:21:02.738928  384440 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 18:21:02.741103  384440 out.go:204]   - Booting up control plane ...
	I1208 18:21:02.741193  384440 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 18:21:02.745707  384440 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 18:21:02.746785  384440 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 18:21:02.747690  384440 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 18:21:02.749599  384440 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1208 18:21:09.252433  384440 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502769 seconds
	I1208 18:21:09.252607  384440 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 18:21:09.263670  384440 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 18:21:09.779758  384440 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 18:21:09.779916  384440 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-722179 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1208 18:21:10.289212  384440 kubeadm.go:322] [bootstrap-token] Using token: tv7cwx.3n85w9ybzf5uww9f
	I1208 18:21:10.290790  384440 out.go:204]   - Configuring RBAC rules ...
	I1208 18:21:10.290947  384440 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 18:21:10.293894  384440 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 18:21:10.299516  384440 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 18:21:10.301210  384440 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 18:21:10.302968  384440 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 18:21:10.304524  384440 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 18:21:10.310682  384440 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 18:21:10.536921  384440 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1208 18:21:10.700459  384440 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1208 18:21:10.701409  384440 kubeadm.go:322] 
	I1208 18:21:10.701475  384440 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1208 18:21:10.701483  384440 kubeadm.go:322] 
	I1208 18:21:10.701545  384440 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1208 18:21:10.701552  384440 kubeadm.go:322] 
	I1208 18:21:10.701573  384440 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1208 18:21:10.701647  384440 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 18:21:10.701700  384440 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 18:21:10.701726  384440 kubeadm.go:322] 
	I1208 18:21:10.701799  384440 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1208 18:21:10.701905  384440 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 18:21:10.701964  384440 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 18:21:10.701978  384440 kubeadm.go:322] 
	I1208 18:21:10.702099  384440 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 18:21:10.702178  384440 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1208 18:21:10.702189  384440 kubeadm.go:322] 
	I1208 18:21:10.702306  384440 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tv7cwx.3n85w9ybzf5uww9f \
	I1208 18:21:10.702465  384440 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 \
	I1208 18:21:10.702498  384440 kubeadm.go:322]     --control-plane 
	I1208 18:21:10.702507  384440 kubeadm.go:322] 
	I1208 18:21:10.702586  384440 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1208 18:21:10.702597  384440 kubeadm.go:322] 
	I1208 18:21:10.702662  384440 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tv7cwx.3n85w9ybzf5uww9f \
	I1208 18:21:10.702748  384440 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 
	I1208 18:21:10.704668  384440 kubeadm.go:322] W1208 18:21:00.185572    1381 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1208 18:21:10.704848  384440 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1208 18:21:10.704932  384440 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 18:21:10.705040  384440 kubeadm.go:322] W1208 18:21:02.745440    1381 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1208 18:21:10.705140  384440 kubeadm.go:322] W1208 18:21:02.746579    1381 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
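
The join commands printed above pin the cluster CA via `--discovery-token-ca-cert-hash`. That value is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo; a self-contained Go sketch that reproduces it from the ca.crt written earlier in this log (illustrative, not a minikube function):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash derives the `sha256:...` pin that kubeadm prints in join commands:
// a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA cert.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(hash) // should match the --discovery-token-ca-cert-hash above
}
```
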
	I1208 18:21:10.705174  384440 cni.go:84] Creating CNI manager for ""
	I1208 18:21:10.705192  384440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:21:10.706965  384440 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1208 18:21:10.708351  384440 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 18:21:10.712207  384440 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1208 18:21:10.712225  384440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1208 18:21:10.728114  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 18:21:11.123805  384440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 18:21:11.123883  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:11.123913  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b minikube.k8s.io/name=ingress-addon-legacy-722179 minikube.k8s.io/updated_at=2023_12_08T18_21_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:11.131138  384440 ops.go:34] apiserver oom_adj: -16
	I1208 18:21:11.224786  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:11.289068  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:11.857865  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:12.357568  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:12.857763  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:13.358324  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:13.858078  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:14.357860  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:14.857492  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:15.358204  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:15.858320  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:16.357314  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:16.857812  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:17.357859  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:17.857555  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:18.357655  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:18.857543  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:19.357888  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:19.857334  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:20.358252  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:20.858340  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:21.357874  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:21.858041  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:22.358188  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:22.858022  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:23.358359  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:23.858242  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:24.358274  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:24.857765  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:25.357712  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:25.857871  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:26.357441  384440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:21:26.459170  384440 kubeadm.go:1088] duration metric: took 15.335354457s to wait for elevateKubeSystemPrivileges.
	I1208 18:21:26.459207  384440 kubeadm.go:406] StartCluster complete in 26.373738191s
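
The burst of `kubectl get sa default` runs above is a fixed-interval poll: elevateKubeSystemPrivileges cannot bind cluster-admin to kube-system:default until the controller manager has created the default service account. A hedged Go sketch of that wait loop (names and timeout are illustrative, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` on a ~500ms cadence,
// matching the log above, until the service account exists or the timeout hits.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default SA exists; RBAC bindings can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.18.20/kubectl",
		"/var/lib/minikube/kubeconfig", 5*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```
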
	I1208 18:21:26.459230  384440 settings.go:142] acquiring lock: {Name:mkb1d8fbfd540ec0ff42a4ec77782a6addbbad21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:21:26.459309  384440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:21:26.460231  384440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/kubeconfig: {Name:mk170d1df5bab3a276f3bc17a718825dd5b16d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:21:26.460982  384440 kapi.go:59] client config for ingress-addon-legacy-722179: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:21:26.462360  384440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 18:21:26.462675  384440 config.go:182] Loaded profile config "ingress-addon-legacy-722179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1208 18:21:26.462737  384440 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1208 18:21:26.462818  384440 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-722179"
	I1208 18:21:26.462831  384440 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-722179"
	I1208 18:21:26.462841  384440 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-722179"
	I1208 18:21:26.462856  384440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-722179"
	I1208 18:21:26.462886  384440 host.go:66] Checking if "ingress-addon-legacy-722179" exists ...
	I1208 18:21:26.463249  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Status}}
	I1208 18:21:26.463401  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Status}}
	I1208 18:21:26.462818  384440 cert_rotation.go:137] Starting client certificate rotation controller
	I1208 18:21:26.481552  384440 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:21:26.480150  384440 kapi.go:59] client config for ingress-addon-legacy-722179: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:21:26.483138  384440 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:21:26.483160  384440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 18:21:26.483215  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:21:26.483221  384440 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-722179"
	I1208 18:21:26.483253  384440 host.go:66] Checking if "ingress-addon-legacy-722179" exists ...
	I1208 18:21:26.483701  384440 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-722179 --format={{.State.Status}}
	I1208 18:21:26.499838  384440 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 18:21:26.499865  384440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 18:21:26.499925  384440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-722179
	I1208 18:21:26.500686  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:21:26.515419  384440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/ingress-addon-legacy-722179/id_rsa Username:docker}
	I1208 18:21:26.529963  384440 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-722179" context rescaled to 1 replicas
	I1208 18:21:26.530014  384440 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:21:26.531965  384440 out.go:177] * Verifying Kubernetes components...
	I1208 18:21:26.533905  384440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:21:26.725828  384440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 18:21:26.726539  384440 kapi.go:59] client config for ingress-addon-legacy-722179: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:21:26.726906  384440 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-722179" to be "Ready" ...
	I1208 18:21:26.739626  384440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 18:21:26.739932  384440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:21:27.138561  384440 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
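
Unescaped for readability, the sed pipeline above adds a `log` directive before `errors` and inserts the following stanza ahead of the `forward . /etc/resolv.conf` line in the CoreDNS Corefile, which is what makes host.minikube.internal resolve to the host gateway (192.168.49.1) from inside the cluster:

```
hosts {
   192.168.49.1 host.minikube.internal
   fallthrough
}
```
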
	I1208 18:21:27.250099  384440 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1208 18:21:27.251947  384440 addons.go:502] enable addons completed in 789.201854ms: enabled=[default-storageclass storage-provisioner]
	I1208 18:21:28.736786  384440 node_ready.go:58] node "ingress-addon-legacy-722179" has status "Ready":"False"
	I1208 18:21:30.737291  384440 node_ready.go:58] node "ingress-addon-legacy-722179" has status "Ready":"False"
	I1208 18:21:31.237809  384440 node_ready.go:49] node "ingress-addon-legacy-722179" has status "Ready":"True"
	I1208 18:21:31.237835  384440 node_ready.go:38] duration metric: took 4.510900782s waiting for node "ingress-addon-legacy-722179" to be "Ready" ...
	I1208 18:21:31.237845  384440 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:21:31.244973  384440 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-d79fj" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:33.253187  384440 pod_ready.go:102] pod "coredns-66bff467f8-d79fj" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-08 18:21:26 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1208 18:21:35.754749  384440 pod_ready.go:102] pod "coredns-66bff467f8-d79fj" in "kube-system" namespace has status "Ready":"False"
	I1208 18:21:37.755647  384440 pod_ready.go:102] pod "coredns-66bff467f8-d79fj" in "kube-system" namespace has status "Ready":"False"
	I1208 18:21:40.255668  384440 pod_ready.go:102] pod "coredns-66bff467f8-d79fj" in "kube-system" namespace has status "Ready":"False"
	I1208 18:21:42.754999  384440 pod_ready.go:92] pod "coredns-66bff467f8-d79fj" in "kube-system" namespace has status "Ready":"True"
	I1208 18:21:42.755024  384440 pod_ready.go:81] duration metric: took 11.510026255s waiting for pod "coredns-66bff467f8-d79fj" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.755034  384440 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.759516  384440 pod_ready.go:92] pod "etcd-ingress-addon-legacy-722179" in "kube-system" namespace has status "Ready":"True"
	I1208 18:21:42.759533  384440 pod_ready.go:81] duration metric: took 4.492956ms waiting for pod "etcd-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.759545  384440 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.763657  384440 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-722179" in "kube-system" namespace has status "Ready":"True"
	I1208 18:21:42.763681  384440 pod_ready.go:81] duration metric: took 4.129454ms waiting for pod "kube-apiserver-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.763694  384440 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.767551  384440 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-722179" in "kube-system" namespace has status "Ready":"True"
	I1208 18:21:42.767566  384440 pod_ready.go:81] duration metric: took 3.865438ms waiting for pod "kube-controller-manager-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.767574  384440 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2zcdd" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.771363  384440 pod_ready.go:92] pod "kube-proxy-2zcdd" in "kube-system" namespace has status "Ready":"True"
	I1208 18:21:42.771380  384440 pod_ready.go:81] duration metric: took 3.800465ms waiting for pod "kube-proxy-2zcdd" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.771390  384440 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:42.950826  384440 request.go:629] Waited for 179.346508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-722179
	I1208 18:21:43.149846  384440 request.go:629] Waited for 196.292224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-722179
	I1208 18:21:43.152490  384440 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-722179" in "kube-system" namespace has status "Ready":"True"
	I1208 18:21:43.152513  384440 pod_ready.go:81] duration metric: took 381.11441ms waiting for pod "kube-scheduler-ingress-addon-legacy-722179" in "kube-system" namespace to be "Ready" ...
	I1208 18:21:43.152524  384440 pod_ready.go:38] duration metric: took 11.91465596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:21:43.152543  384440 api_server.go:52] waiting for apiserver process to appear ...
	I1208 18:21:43.152597  384440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 18:21:43.163392  384440 api_server.go:72] duration metric: took 16.633341032s to wait for apiserver process to appear ...
	I1208 18:21:43.163417  384440 api_server.go:88] waiting for apiserver healthz status ...
	I1208 18:21:43.163433  384440 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1208 18:21:43.168118  384440 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1208 18:21:43.169120  384440 api_server.go:141] control plane version: v1.18.20
	I1208 18:21:43.169144  384440 api_server.go:131] duration metric: took 5.721848ms to wait for apiserver health ...
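
The healthz probe above is a plain HTTPS GET authenticated with the profile's client certificate and verified against the cluster CA. A minimal sketch, assuming the cert/key/CA paths from the kapi.go client config earlier in this log (not minikube's actual probe code):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/17738-336823/.minikube"
	// Client cert/key for the profile; the apiserver requires client auth.
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/ingress-addon-legacy-722179/client.crt",
		base+"/profiles/ingress-addon-legacy-722179/client.key")
	if err != nil {
		panic(err)
	}
	// Trust the cluster CA so the apiserver's serving cert verifies.
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```
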
	I1208 18:21:43.169153  384440 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 18:21:43.350523  384440 request.go:629] Waited for 181.296761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:21:43.355839  384440 system_pods.go:59] 8 kube-system pods found
	I1208 18:21:43.355873  384440 system_pods.go:61] "coredns-66bff467f8-d79fj" [c3cac2be-8bc9-438f-a586-b0d63343c550] Running
	I1208 18:21:43.355880  384440 system_pods.go:61] "etcd-ingress-addon-legacy-722179" [227a3b29-30f4-4f1a-885e-06269491cf68] Running
	I1208 18:21:43.355884  384440 system_pods.go:61] "kindnet-q7zdj" [6c1aae3f-c629-42d2-9794-508f402cf193] Running
	I1208 18:21:43.355888  384440 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-722179" [f80cee56-b3aa-430e-98bb-4c6a41fad23f] Running
	I1208 18:21:43.355892  384440 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-722179" [42b71003-7bfc-4377-b91d-3b7aa8a66f63] Running
	I1208 18:21:43.355897  384440 system_pods.go:61] "kube-proxy-2zcdd" [7f3de139-4199-40c4-ba57-e11ee6c17e3f] Running
	I1208 18:21:43.355904  384440 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-722179" [e831bbd6-bff6-4f53-a141-bf8bf640a3d0] Running
	I1208 18:21:43.355908  384440 system_pods.go:61] "storage-provisioner" [74a5d0a8-0337-4f98-af94-76c707b514ea] Running
	I1208 18:21:43.355914  384440 system_pods.go:74] duration metric: took 186.755738ms to wait for pod list to return data ...
	I1208 18:21:43.355925  384440 default_sa.go:34] waiting for default service account to be created ...
	I1208 18:21:43.550346  384440 request.go:629] Waited for 194.352955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1208 18:21:43.552748  384440 default_sa.go:45] found service account: "default"
	I1208 18:21:43.552776  384440 default_sa.go:55] duration metric: took 196.842365ms for default service account to be created ...
	I1208 18:21:43.552785  384440 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 18:21:43.750226  384440 request.go:629] Waited for 197.350089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:21:43.756047  384440 system_pods.go:86] 8 kube-system pods found
	I1208 18:21:43.756075  384440 system_pods.go:89] "coredns-66bff467f8-d79fj" [c3cac2be-8bc9-438f-a586-b0d63343c550] Running
	I1208 18:21:43.756080  384440 system_pods.go:89] "etcd-ingress-addon-legacy-722179" [227a3b29-30f4-4f1a-885e-06269491cf68] Running
	I1208 18:21:43.756084  384440 system_pods.go:89] "kindnet-q7zdj" [6c1aae3f-c629-42d2-9794-508f402cf193] Running
	I1208 18:21:43.756089  384440 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-722179" [f80cee56-b3aa-430e-98bb-4c6a41fad23f] Running
	I1208 18:21:43.756093  384440 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-722179" [42b71003-7bfc-4377-b91d-3b7aa8a66f63] Running
	I1208 18:21:43.756101  384440 system_pods.go:89] "kube-proxy-2zcdd" [7f3de139-4199-40c4-ba57-e11ee6c17e3f] Running
	I1208 18:21:43.756105  384440 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-722179" [e831bbd6-bff6-4f53-a141-bf8bf640a3d0] Running
	I1208 18:21:43.756109  384440 system_pods.go:89] "storage-provisioner" [74a5d0a8-0337-4f98-af94-76c707b514ea] Running
	I1208 18:21:43.756116  384440 system_pods.go:126] duration metric: took 203.325914ms to wait for k8s-apps to be running ...
	I1208 18:21:43.756130  384440 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 18:21:43.756175  384440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:21:43.767314  384440 system_svc.go:56] duration metric: took 11.174266ms WaitForService to wait for kubelet.
	I1208 18:21:43.767337  384440 kubeadm.go:581] duration metric: took 17.237293279s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1208 18:21:43.767355  384440 node_conditions.go:102] verifying NodePressure condition ...
	I1208 18:21:43.950804  384440 request.go:629] Waited for 183.347728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1208 18:21:43.953695  384440 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1208 18:21:43.953720  384440 node_conditions.go:123] node cpu capacity is 8
	I1208 18:21:43.953733  384440 node_conditions.go:105] duration metric: took 186.373925ms to run NodePressure ...
	I1208 18:21:43.953744  384440 start.go:228] waiting for startup goroutines ...
	I1208 18:21:43.953750  384440 start.go:233] waiting for cluster config update ...
	I1208 18:21:43.953759  384440 start.go:242] writing updated cluster config ...
	I1208 18:21:43.954025  384440 ssh_runner.go:195] Run: rm -f paused
	I1208 18:21:44.001041  384440 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1208 18:21:44.003746  384440 out.go:177] 
	W1208 18:21:44.005585  384440 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1208 18:21:44.007177  384440 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1208 18:21:44.008669  384440 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-722179" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 08 18:24:27 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:27.007342949Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-w9pxs/hello-world-app" id=64a9f1f5-c0f7-4f4c-92b5-9116bafe944c name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 08 18:24:27 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:27.007479473Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 08 18:24:27 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:27.096694266Z" level=info msg="Created container e09022d1da5e8fdd068bcfc0a0ca303d2b380a84d458ddc3a2f3f4bc18cbe285: default/hello-world-app-5f5d8b66bb-w9pxs/hello-world-app" id=64a9f1f5-c0f7-4f4c-92b5-9116bafe944c name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 08 18:24:27 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:27.097244761Z" level=info msg="Starting container: e09022d1da5e8fdd068bcfc0a0ca303d2b380a84d458ddc3a2f3f4bc18cbe285" id=faea8e2f-fb91-445f-bf3b-04a21c6d4298 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Dec 08 18:24:27 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:27.105993102Z" level=info msg="Started container" PID=4914 containerID=e09022d1da5e8fdd068bcfc0a0ca303d2b380a84d458ddc3a2f3f4bc18cbe285 description=default/hello-world-app-5f5d8b66bb-w9pxs/hello-world-app id=faea8e2f-fb91-445f-bf3b-04a21c6d4298 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=b9b5817c8798d356c5d9e3b5cb1785fad5d2330b02438faf29c27958cd96c8c4
	Dec 08 18:24:36 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:36.883112124Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=9397a522-4d2f-4103-903d-868587748e1d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 08 18:24:42 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:42.882697427Z" level=info msg="Stopping pod sandbox: 1706c6a626dc1ff3554f226a2ebe5dd47e7213c966dfea66c3a708eec22bf99a" id=a6088158-4970-46f0-83db-d18ce43f9315 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 08 18:24:42 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:42.883773696Z" level=info msg="Stopped pod sandbox: 1706c6a626dc1ff3554f226a2ebe5dd47e7213c966dfea66c3a708eec22bf99a" id=a6088158-4970-46f0-83db-d18ce43f9315 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 08 18:24:44 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:44.136104248Z" level=info msg="Stopping container: cd88ddce05acd037bb4f13eb18d093e1e1b2185b20a8df7ee826ab5457f659f2 (timeout: 2s)" id=68cf4ea1-0243-4fb9-8efa-58ebd0c02d55 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 08 18:24:44 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:44.138278040Z" level=info msg="Stopping container: cd88ddce05acd037bb4f13eb18d093e1e1b2185b20a8df7ee826ab5457f659f2 (timeout: 2s)" id=c29658e3-fe1a-4e2f-b7ea-dd62176a3cb8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.145654656Z" level=warning msg="Stopping container cd88ddce05acd037bb4f13eb18d093e1e1b2185b20a8df7ee826ab5457f659f2 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=68cf4ea1-0243-4fb9-8efa-58ebd0c02d55 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 08 18:24:46 ingress-addon-legacy-722179 conmon[3452]: conmon cd88ddce05acd037bb4f <ninfo>: container 3464 exited with status 137
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.309121838Z" level=info msg="Stopped container cd88ddce05acd037bb4f13eb18d093e1e1b2185b20a8df7ee826ab5457f659f2: ingress-nginx/ingress-nginx-controller-7fcf777cb7-tvknw/controller" id=c29658e3-fe1a-4e2f-b7ea-dd62176a3cb8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.309192604Z" level=info msg="Stopped container cd88ddce05acd037bb4f13eb18d093e1e1b2185b20a8df7ee826ab5457f659f2: ingress-nginx/ingress-nginx-controller-7fcf777cb7-tvknw/controller" id=68cf4ea1-0243-4fb9-8efa-58ebd0c02d55 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.309758131Z" level=info msg="Stopping pod sandbox: c9f4ae81c116e358c908942c88a760c5922e900bbe2139107df69adf44b62180" id=b53c98eb-b930-4eab-8fac-1a069b7baee4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.309758388Z" level=info msg="Stopping pod sandbox: c9f4ae81c116e358c908942c88a760c5922e900bbe2139107df69adf44b62180" id=6917d815-3c39-4936-9933-d8fa541e90a4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.313131260Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-X5Z5SWMPVZJU6WVZ - [0:0]\n:KUBE-HP-NX6RMQJIB5367G2A - [0:0]\n-X KUBE-HP-X5Z5SWMPVZJU6WVZ\n-X KUBE-HP-NX6RMQJIB5367G2A\nCOMMIT\n"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.314421682Z" level=info msg="Closing host port tcp:80"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.314486687Z" level=info msg="Closing host port tcp:443"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.315456541Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.315475134Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.315601940Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-tvknw Namespace:ingress-nginx ID:c9f4ae81c116e358c908942c88a760c5922e900bbe2139107df69adf44b62180 UID:82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde NetNS:/var/run/netns/c27b0b81-9c28-4580-9cdc-74ca88966ecf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.315726681Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-tvknw from CNI network \"kindnet\" (type=ptp)"
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.351897568Z" level=info msg="Stopped pod sandbox: c9f4ae81c116e358c908942c88a760c5922e900bbe2139107df69adf44b62180" id=b53c98eb-b930-4eab-8fac-1a069b7baee4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 08 18:24:46 ingress-addon-legacy-722179 crio[958]: time="2023-12-08 18:24:46.352014306Z" level=info msg="Stopped pod sandbox (already stopped): c9f4ae81c116e358c908942c88a760c5922e900bbe2139107df69adf44b62180" id=6917d815-3c39-4936-9933-d8fa541e90a4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
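
For readability, the escaped payload in the `Restoring iptables rules` line above — CRI-O tearing down the hostport NAT chains (tcp:80/443) for the stopped ingress controller sandbox — expands to:

```
*nat
:KUBE-HOSTPORTS - [0:0]
:KUBE-HP-X5Z5SWMPVZJU6WVZ - [0:0]
:KUBE-HP-NX6RMQJIB5367G2A - [0:0]
-X KUBE-HP-X5Z5SWMPVZJU6WVZ
-X KUBE-HP-NX6RMQJIB5367G2A
COMMIT
```
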
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e09022d1da5e8       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            24 seconds ago      Running             hello-world-app           0                   b9b5817c8798d       hello-world-app-5f5d8b66bb-w9pxs
	8320e320f5f54       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   604248f23fe07       nginx
	cd88ddce05acd       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   c9f4ae81c116e       ingress-nginx-controller-7fcf777cb7-tvknw
	21199dd0cd9c0       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   b0f9d9ecdec7b       ingress-nginx-admission-patch-td2nj
	ea26edcc7cf41       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   f77025ab0fe81       ingress-nginx-admission-create-f2gpj
	e05e8ad770323       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   857143970abe7       coredns-66bff467f8-d79fj
	f7b9ee5e6eacd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   6e0f4eb633139       storage-provisioner
	04bacfcb8ff93       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   371d1c3e0376d       kindnet-q7zdj
	1d595bdbd89e9       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   fde99d121cf17       kube-proxy-2zcdd
	542ddfbb29a77       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   8b5b9bd6a1a96       kube-scheduler-ingress-addon-legacy-722179
	959385d803522       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   2605dad013e3b       etcd-ingress-addon-legacy-722179
	a4ad524ff7150       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   369ccd7bdc20b       kube-controller-manager-ingress-addon-legacy-722179
	a9367c8329fc1       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   11ba1d6af316b       kube-apiserver-ingress-addon-legacy-722179
	
	* 
	* ==> coredns [e05e8ad77032334e4d3fb713519d074f5d0617bbc3f228ab87e78fd9179a40be] <==
	* [INFO] 10.244.0.5:33628 - 54502 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004658688s
	[INFO] 10.244.0.5:33628 - 10685 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005262224s
	[INFO] 10.244.0.5:59472 - 42379 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005751813s
	[INFO] 10.244.0.5:47155 - 18923 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005532933s
	[INFO] 10.244.0.5:50035 - 18308 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005731133s
	[INFO] 10.244.0.5:34900 - 54111 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005557874s
	[INFO] 10.244.0.5:39646 - 51819 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005669681s
	[INFO] 10.244.0.5:52032 - 42993 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005769203s
	[INFO] 10.244.0.5:49213 - 28429 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00569622s
	[INFO] 10.244.0.5:34900 - 25601 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005002679s
	[INFO] 10.244.0.5:52032 - 32026 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004824977s
	[INFO] 10.244.0.5:59472 - 40301 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005243727s
	[INFO] 10.244.0.5:49213 - 54943 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005057246s
	[INFO] 10.244.0.5:50035 - 61483 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005232937s
	[INFO] 10.244.0.5:47155 - 62355 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005233484s
	[INFO] 10.244.0.5:34900 - 16278 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069789s
	[INFO] 10.244.0.5:59472 - 8821 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064149s
	[INFO] 10.244.0.5:33628 - 10110 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005473163s
	[INFO] 10.244.0.5:49213 - 63886 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096371s
	[INFO] 10.244.0.5:39646 - 21645 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005353936s
	[INFO] 10.244.0.5:50035 - 39645 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063142s
	[INFO] 10.244.0.5:33628 - 38118 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038178s
	[INFO] 10.244.0.5:47155 - 28524 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000094753s
	[INFO] 10.244.0.5:39646 - 23933 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073285s
	[INFO] 10.244.0.5:52032 - 48542 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060154s
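
These paired NXDOMAIN/NOERROR lookups trace the resolver's search-path expansion: hello-world-app.default.svc.cluster.local has fewer dots than the ndots threshold, so the host-inherited suffixes c.k8s-minikube.internal and google.internal are tried (and fail) before the absolute cluster name answers NOERROR. A pod /etc/resolv.conf consistent with these queries would look roughly like the sketch below; the actual file was not captured in this log, and the nameserver address is the conventional kube-dns ClusterIP, assumed for illustration.

	nameserver 10.96.0.10
	search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	options ndots:5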
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-722179
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-722179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b
	                    minikube.k8s.io/name=ingress-addon-legacy-722179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_08T18_21_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Dec 2023 18:21:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-722179
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Dec 2023 18:24:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Dec 2023 18:24:41 +0000   Fri, 08 Dec 2023 18:21:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Dec 2023 18:24:41 +0000   Fri, 08 Dec 2023 18:21:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Dec 2023 18:24:41 +0000   Fri, 08 Dec 2023 18:21:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Dec 2023 18:24:41 +0000   Fri, 08 Dec 2023 18:21:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-722179
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ddb86134c1d4a3d9012daea1e005cbd
	  System UUID:                14980dde-0728-45f3-9d68-2646633a6f3d
	  Boot ID:                    fbb3830a-6e88-496f-844f-172e564c45c3
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-w9pxs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-d79fj                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m25s
	  kube-system                 etcd-ingress-addon-legacy-722179                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kindnet-q7zdj                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m25s
	  kube-system                 kube-apiserver-ingress-addon-legacy-722179             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-722179    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-proxy-2zcdd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-scheduler-ingress-addon-legacy-722179             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m48s (x5 over 3m49s)  kubelet     Node ingress-addon-legacy-722179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-722179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-722179 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s                  kubelet     Node ingress-addon-legacy-722179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s                  kubelet     Node ingress-addon-legacy-722179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s                  kubelet     Node ingress-addon-legacy-722179 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m21s                  kubelet     Node ingress-addon-legacy-722179 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007355] FS-Cache: O-key=[8] 'b9a20f0200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.006690] FS-Cache: N-cookie d=0000000059c528da{9p.inode} n=0000000061bf7b75
	[  +0.008747] FS-Cache: N-key=[8] 'b9a20f0200000000'
	[  +4.078898] FS-Cache: Duplicate cookie detected
	[  +0.004678] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006765] FS-Cache: O-cookie d=00000000a0c9b1c7{9P.session} n=00000000c84b6137
	[  +0.007522] FS-Cache: O-key=[10] '34323936373230333034'
	[  +0.005375] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006569] FS-Cache: N-cookie d=00000000a0c9b1c7{9P.session} n=00000000bc9fd172
	[  +0.008904] FS-Cache: N-key=[10] '34323936373230333034'
	[Dec 8 18:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +1.023718] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +2.015782] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +4.127562] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[ +16.126373] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[Dec 8 18:23] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
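
The "martian source" entries flag packets that fail reverse-path validation: a source of 127.0.0.1 can never legitimately arrive on eth0, so the kernel reports them (and, under strict rp_filter, drops them). Whether the reports appear at all is governed by sysctls, shown here for illustration only; this host's actual settings were not captured:

	# log_martians=1 emits the dmesg lines above; rp_filter=1 enables strict source validation
	sysctl net.ipv4.conf.all.log_martians=1
	sysctl net.ipv4.conf.all.rp_filter=1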
	
	* 
	* ==> etcd [959385d80352290bef5f839d73bc7202b44e7734390224d642023b7ed6f0fa16] <==
	* raft2023/12/08 18:21:03 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/08 18:21:03 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/08 18:21:03 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/08 18:21:03 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-08 18:21:03.945853 W | auth: simple token is not cryptographically signed
	2023-12-08 18:21:03.949271 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-08 18:21:03.950131 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/08 18:21:03 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-08 18:21:03.950831 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-08 18:21:03.952061 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-08 18:21:03.952232 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-08 18:21:03.952310 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/08 18:21:04 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/08 18:21:04 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/08 18:21:04 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/08 18:21:04 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/08 18:21:04 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-08 18:21:04.442402 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-08 18:21:04.443339 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-08 18:21:04.443421 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-08 18:21:04.443429 I | etcdserver: published {Name:ingress-addon-legacy-722179 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-08 18:21:04.443440 I | embed: ready to serve client requests
	2023-12-08 18:21:04.443462 I | embed: ready to serve client requests
	2023-12-08 18:21:04.445704 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-08 18:21:04.445845 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  18:24:51 up  2:06,  0 users,  load average: 0.43, 0.71, 0.63
	Linux ingress-addon-legacy-722179 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [04bacfcb8ff9308e3e7010261cd0f7c324f1699f3e170aee888324e5a713dbc7] <==
	* I1208 18:22:49.889423       1 main.go:227] handling current node
	I1208 18:22:59.901555       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:22:59.901580       1 main.go:227] handling current node
	I1208 18:23:09.906811       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:23:09.906835       1 main.go:227] handling current node
	I1208 18:23:19.916557       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:23:19.916583       1 main.go:227] handling current node
	I1208 18:23:29.925592       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:23:29.925618       1 main.go:227] handling current node
	I1208 18:23:39.929607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:23:39.929635       1 main.go:227] handling current node
	I1208 18:23:49.933277       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:23:49.933305       1 main.go:227] handling current node
	I1208 18:23:59.945100       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:23:59.945124       1 main.go:227] handling current node
	I1208 18:24:09.951319       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:24:09.951344       1 main.go:227] handling current node
	I1208 18:24:19.954799       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:24:19.954827       1 main.go:227] handling current node
	I1208 18:24:29.958918       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:24:29.958944       1 main.go:227] handling current node
	I1208 18:24:39.969600       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:24:39.969628       1 main.go:227] handling current node
	I1208 18:24:49.981495       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1208 18:24:49.981520       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a9367c8329fc1e7abbe807795dc704a5a1365069974d40217d1c6a4134abbe85] <==
	* I1208 18:21:07.734507       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1208 18:21:07.735722       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1208 18:21:07.833877       1 cache.go:39] Caches are synced for autoregister controller
	I1208 18:21:07.833877       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1208 18:21:07.833903       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1208 18:21:07.833915       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1208 18:21:07.836717       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1208 18:21:08.733074       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1208 18:21:08.733174       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1208 18:21:08.737316       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1208 18:21:08.739957       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1208 18:21:08.739976       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1208 18:21:09.049716       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 18:21:09.077689       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1208 18:21:09.146309       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1208 18:21:09.147217       1 controller.go:609] quota admission added evaluator for: endpoints
	I1208 18:21:09.150357       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 18:21:10.081908       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1208 18:21:10.527898       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1208 18:21:10.692491       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1208 18:21:10.868428       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 18:21:25.990767       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1208 18:21:26.382396       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1208 18:21:44.682800       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1208 18:22:04.308728       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [a4ad524ff71507b20ef2b563cf597ea47f500a1820d3d072944c77b6598a96eb] <==
	* I1208 18:21:26.384239       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d33d9102-5f86-4c1c-8789-468687bf95d9", APIVersion:"apps/v1", ResourceVersion:"195", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1208 18:21:26.389845       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c5c998e5-30cd-45d7-a74c-e3680c156f96", APIVersion:"apps/v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-gpwlz
	I1208 18:21:26.396124       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c5c998e5-30cd-45d7-a74c-e3680c156f96", APIVersion:"apps/v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-d79fj
	I1208 18:21:26.401904       1 shared_informer.go:230] Caches are synced for disruption 
	I1208 18:21:26.401923       1 disruption.go:339] Sending events to api server.
	I1208 18:21:26.531869       1 shared_informer.go:230] Caches are synced for resource quota 
	I1208 18:21:26.536765       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d33d9102-5f86-4c1c-8789-468687bf95d9", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1208 18:21:26.546092       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c5c998e5-30cd-45d7-a74c-e3680c156f96", APIVersion:"apps/v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-gpwlz
	I1208 18:21:26.618650       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1208 18:21:26.618689       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1208 18:21:26.618795       1 shared_informer.go:230] Caches are synced for resource quota 
	I1208 18:21:26.618657       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1208 18:21:26.618656       1 shared_informer.go:230] Caches are synced for endpoint 
	I1208 18:21:27.021136       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1208 18:21:27.021187       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1208 18:21:35.932469       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1208 18:21:44.675006       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6d17b29f-7e8e-44a1-8b3a-a7f1b31c0177", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1208 18:21:44.723178       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"c6e1d6a3-9c93-4a06-99de-cce602d440eb", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-tvknw
	I1208 18:21:44.726309       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6dc44546-7870-43cb-9421-42902815d6e7", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-f2gpj
	I1208 18:21:44.831993       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"68b0d966-c533-4f0a-bccd-4a6a496eb23a", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-td2nj
	I1208 18:21:48.064578       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6dc44546-7870-43cb-9421-42902815d6e7", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1208 18:21:49.065887       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"68b0d966-c533-4f0a-bccd-4a6a496eb23a", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1208 18:24:25.243832       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"7da6c821-93b5-4203-a422-d5c8bbeca75b", APIVersion:"apps/v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1208 18:24:25.250066       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"854e8630-f09d-40e7-8197-9052646adabc", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-w9pxs
	E1208 18:24:48.950975       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-xqwk9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [1d595bdbd89e9d695bcabd093e4e99eb776e4773ce981f006d723edffd07ee43] <==
	* W1208 18:21:26.668333       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1208 18:21:26.723646       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1208 18:21:26.723684       1 server_others.go:186] Using iptables Proxier.
	I1208 18:21:26.724034       1 server.go:583] Version: v1.18.20
	I1208 18:21:26.724536       1 config.go:133] Starting endpoints config controller
	I1208 18:21:26.724557       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1208 18:21:26.724589       1 config.go:315] Starting service config controller
	I1208 18:21:26.724593       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1208 18:21:26.824735       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1208 18:21:26.824738       1 shared_informer.go:230] Caches are synced for service config 
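
The "Unknown proxy mode" warning at startup means no mode was set in kube-proxy's configuration, so it fell back to the iptables proxier. Pinning the mode explicitly goes through the KubeProxyConfiguration object; the relevant stanza would look like this sketch (not taken from this cluster's config):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	mode: "iptables"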
	
	* 
	* ==> kube-scheduler [542ddfbb29a772bdd150666a50210f6267609d574856785477eae9539e45bcd4] <==
	* W1208 18:21:07.760217       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 18:21:07.760227       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 18:21:07.760235       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 18:21:07.828136       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1208 18:21:07.828163       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1208 18:21:07.830592       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1208 18:21:07.830673       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1208 18:21:07.831431       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1208 18:21:07.832192       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1208 18:21:07.832384       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1208 18:21:07.832629       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 18:21:07.832629       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:21:07.833070       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1208 18:21:07.833311       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1208 18:21:07.833510       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1208 18:21:07.833585       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1208 18:21:07.833624       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1208 18:21:07.833684       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1208 18:21:07.833837       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1208 18:21:07.834071       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1208 18:21:07.834332       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1208 18:21:08.738012       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1208 18:21:08.875424       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1208 18:21:08.884993       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1208 18:21:11.430864       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 08 18:24:11 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:11.883347    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:11 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:11.883374    1868 pod_workers.go:191] Error syncing pod 5fb27a2d-1577-4a10-baca-eeb7cf65d22e ("kube-ingress-dns-minikube_kube-system(5fb27a2d-1577-4a10-baca-eeb7cf65d22e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 08 18:24:23 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:23.883493    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:23 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:23.883546    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:23 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:23.883603    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:23 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:23.883638    1868 pod_workers.go:191] Error syncing pod 5fb27a2d-1577-4a10-baca-eeb7cf65d22e ("kube-ingress-dns-minikube_kube-system(5fb27a2d-1577-4a10-baca-eeb7cf65d22e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 08 18:24:25 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:25.255900    1868 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 08 18:24:25 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:25.380436    1868 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-kvjkr" (UniqueName: "kubernetes.io/secret/2cbadb63-8302-4ced-9012-86ccb6c2e502-default-token-kvjkr") pod "hello-world-app-5f5d8b66bb-w9pxs" (UID: "2cbadb63-8302-4ced-9012-86ccb6c2e502")
	Dec 08 18:24:25 ingress-addon-legacy-722179 kubelet[1868]: W1208 18:24:25.607317    1868 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/c5d49c9b500c727e7ae41a3174324e357dfce6f007ae9ae3dbef9cb1160d8c3c/crio-b9b5817c8798d356c5d9e3b5cb1785fad5d2330b02438faf29c27958cd96c8c4 WatchSource:0}: Error finding container b9b5817c8798d356c5d9e3b5cb1785fad5d2330b02438faf29c27958cd96c8c4: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc00067c040 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Dec 08 18:24:36 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:36.883511    1868 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:36 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:36.883559    1868 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:36 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:36.883620    1868 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 08 18:24:36 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:36.883655    1868 pod_workers.go:191] Error syncing pod 5fb27a2d-1577-4a10-baca-eeb7cf65d22e ("kube-ingress-dns-minikube_kube-system(5fb27a2d-1577-4a10-baca-eeb7cf65d22e)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 08 18:24:41 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:41.053793    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-bw8jp" (UniqueName: "kubernetes.io/secret/5fb27a2d-1577-4a10-baca-eeb7cf65d22e-minikube-ingress-dns-token-bw8jp") pod "5fb27a2d-1577-4a10-baca-eeb7cf65d22e" (UID: "5fb27a2d-1577-4a10-baca-eeb7cf65d22e")
	Dec 08 18:24:41 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:41.055692    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb27a2d-1577-4a10-baca-eeb7cf65d22e-minikube-ingress-dns-token-bw8jp" (OuterVolumeSpecName: "minikube-ingress-dns-token-bw8jp") pod "5fb27a2d-1577-4a10-baca-eeb7cf65d22e" (UID: "5fb27a2d-1577-4a10-baca-eeb7cf65d22e"). InnerVolumeSpecName "minikube-ingress-dns-token-bw8jp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 08 18:24:41 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:41.154085    1868 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-bw8jp" (UniqueName: "kubernetes.io/secret/5fb27a2d-1577-4a10-baca-eeb7cf65d22e-minikube-ingress-dns-token-bw8jp") on node "ingress-addon-legacy-722179" DevicePath ""
	Dec 08 18:24:44 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:44.137095    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-tvknw.179eee725c59780c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-tvknw", UID:"82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-722179"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154f5bb0816400c, ext:213647069187, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154f5bb0816400c, ext:213647069187, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-tvknw.179eee725c59780c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 08 18:24:44 ingress-addon-legacy-722179 kubelet[1868]: E1208 18:24:44.140754    1868 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-tvknw.179eee725c59780c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-tvknw", UID:"82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-722179"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154f5bb0816400c, ext:213647069187, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154f5bb0839f7bf, ext:213649409973, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-tvknw.179eee725c59780c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 08 18:24:46 ingress-addon-legacy-722179 kubelet[1868]: W1208 18:24:46.384247    1868 pod_container_deletor.go:77] Container "c9f4ae81c116e358c908942c88a760c5922e900bbe2139107df69adf44b62180" not found in pod's containers
	Dec 08 18:24:48 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:48.270999    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde-webhook-cert") pod "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde" (UID: "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde")
	Dec 08 18:24:48 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:48.271055    1868 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-bskhx" (UniqueName: "kubernetes.io/secret/82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde-ingress-nginx-token-bskhx") pod "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde" (UID: "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde")
	Dec 08 18:24:48 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:48.273052    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde-ingress-nginx-token-bskhx" (OuterVolumeSpecName: "ingress-nginx-token-bskhx") pod "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde" (UID: "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde"). InnerVolumeSpecName "ingress-nginx-token-bskhx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 08 18:24:48 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:48.273143    1868 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde" (UID: "82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 08 18:24:48 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:48.371372    1868 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde-webhook-cert") on node "ingress-addon-legacy-722179" DevicePath ""
	Dec 08 18:24:48 ingress-addon-legacy-722179 kubelet[1868]: I1208 18:24:48.371427    1868 reconciler.go:319] Volume detached for volume "ingress-nginx-token-bskhx" (UniqueName: "kubernetes.io/secret/82dfa11a-6fd5-4f0f-a4f9-26f1e7580dde-ingress-nginx-token-bskhx") on node "ingress-addon-legacy-722179" DevicePath ""
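
The repeated ImageInspectError entries above share one root cause: the addon references its image by the short name cryptexlabs/minikube-ingress-dns, and CRI-O refuses to expand short names when /etc/containers/registries.conf declares no unqualified-search registries. Fully qualifying the reference (docker.io/cryptexlabs/minikube-ingress-dns:...) would avoid the lookup entirely; alternatively, a minimal registries.conf stanza like the following sketch lets short names resolve, assuming docker.io is the intended registry:

	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]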
	
	* 
	* ==> storage-provisioner [f7b9ee5e6eacdd24e637be807a423748d9635e98133509808a54885df1e400e8] <==
	* I1208 18:21:31.883527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1208 18:21:31.890573       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1208 18:21:31.890608       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1208 18:21:32.057596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1208 18:21:32.057693       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7138f6ac-faee-485b-94aa-a26eec8a4d10", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-722179_0d5d363a-cf5b-470e-8a71-c136eea63a52 became leader
	I1208 18:21:32.057816       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-722179_0d5d363a-cf5b-470e-8a71-c136eea63a52!
	I1208 18:21:32.158063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-722179_0d5d363a-cf5b-470e-8a71-c136eea63a52!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-722179 -n ingress-addon-legacy-722179
E1208 18:24:52.373168  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-722179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.63s)

TestMultiNode/serial/PingHostFrom2Pods (3.17s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- sh -c "ping -c 1 192.168.58.1": exit status 1 (192.373403ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-mb9gz): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- sh -c "ping -c 1 192.168.58.1": exit status 1 (176.626222ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-wwj6s): exit status 1
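
In both pods the ping header prints and the probe then fails with "permission denied (are you root?)": busybox ping needs a raw ICMP socket, which requires CAP_NET_RAW (or unprivileged ICMP sockets enabled via net.ipv4.ping_group_range), and neither is available in these containers. A pod spec granting the capability would look like this sketch (a hypothetical manifest, not the test's actual testdata):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-ping        # hypothetical name
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]    # allows ping to open a raw ICMP socket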
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985452
helpers_test.go:235: (dbg) docker inspect multinode-985452:

-- stdout --
	[
	    {
	        "Id": "7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7",
	        "Created": "2023-12-08T18:30:03.747589759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 430525,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-08T18:30:04.012126422Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7e83e141d5f1084600bb5c7d15c9e2fd69083458051c2cf9d21dfd6243a0ff9b",
	        "ResolvConfPath": "/var/lib/docker/containers/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/hosts",
	        "LogPath": "/var/lib/docker/containers/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7-json.log",
	        "Name": "/multinode-985452",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-985452:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-985452",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/53d78516bda936bd509253ffd74354002186ff8b6d49cd0377b7fbcc70e764dd-init/diff:/var/lib/docker/overlay2/f01fd4b86350391aeb4ddce306a73284c32c8168179c226f9bf8857f27cbe54b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53d78516bda936bd509253ffd74354002186ff8b6d49cd0377b7fbcc70e764dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53d78516bda936bd509253ffd74354002186ff8b6d49cd0377b7fbcc70e764dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53d78516bda936bd509253ffd74354002186ff8b6d49cd0377b7fbcc70e764dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-985452",
	                "Source": "/var/lib/docker/volumes/multinode-985452/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-985452",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-985452",
	                "name.minikube.sigs.k8s.io": "multinode-985452",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be7115302d7507625fc2adb29b72e8caa2806c5dedd9a82da6efce80466b1429",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be7115302d75",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-985452": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7f6d7ec17b65",
	                        "multinode-985452"
	                    ],
	                    "NetworkID": "edf9e9ab014370c29f5c0eb2d59f6dd55a3feb185324efb4bed4a59cfecd49a8",
	                    "EndpointID": "3fa31127f189b577cbaf7c3566c6877c4be4dbc99c1f6450735ed13d331b7e0c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
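For anyone replaying this report against a live profile, the host-side port mappings and the node address recorded in the inspect output above can be pulled with the same Go templates the harness uses later in these logs (a sketch; the container and its ephemeral ports only exist while the profile is running):

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-985452
  # 33149 in this run: the host side of the SSH port used for provisioning
  docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' multinode-985452
  # 192.168.58.2 in this run: the node address on the multinode-985452 bridge network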
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-985452 -n multinode-985452
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-985452 logs -n 25: (1.19693356s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-414543                           | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-414543 ssh -- ls                    | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-395968                           | mount-start-1-395968 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-414543 ssh -- ls                    | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-414543                           | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	| start   | -p mount-start-2-414543                           | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	| ssh     | mount-start-2-414543 ssh -- ls                    | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-414543                           | mount-start-2-414543 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	| delete  | -p mount-start-1-395968                           | mount-start-1-395968 | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:29 UTC |
	| start   | -p multinode-985452                               | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:29 UTC | 08 Dec 23 18:31 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- apply -f                   | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- rollout                    | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- get pods -o                | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- get pods -o                | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-mb9gz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-wwj6s --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-mb9gz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-wwj6s --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-mb9gz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-wwj6s -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- get pods -o                | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-mb9gz                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC |                     |
	|         | busybox-5bc68d56bd-mb9gz -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC | 08 Dec 23 18:31 UTC |
	|         | busybox-5bc68d56bd-wwj6s                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-985452 -- exec                       | multinode-985452     | jenkins | v1.32.0 | 08 Dec 23 18:31 UTC |                     |
	|         | busybox-5bc68d56bd-wwj6s -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
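	# Reproduction sketch: the two audit rows above with no End Time are the failing
	# ping checks. Assuming the pod names and gateway IP from this run, they can be
	# replayed with:
	#   out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- sh -c "ping -c 1 192.168.58.1"
	#   out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- sh -c "ping -c 1 192.168.58.1"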
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:29:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:29:57.697528  429920 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:29:57.697806  429920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:29:57.697816  429920 out.go:309] Setting ErrFile to fd 2...
	I1208 18:29:57.697824  429920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:29:57.698040  429920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:29:57.698674  429920 out.go:303] Setting JSON to false
	I1208 18:29:57.699722  429920 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7898,"bootTime":1702052300,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:29:57.699788  429920 start.go:138] virtualization: kvm guest
	I1208 18:29:57.701935  429920 out.go:177] * [multinode-985452] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:29:57.703305  429920 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:29:57.704621  429920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:29:57.703347  429920 notify.go:220] Checking for updates...
	I1208 18:29:57.707554  429920 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:29:57.708996  429920 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:29:57.710487  429920 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:29:57.711973  429920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:29:57.713460  429920 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:29:57.734560  429920 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:29:57.734694  429920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:29:57.787049  429920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-08 18:29:57.778651159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:29:57.787145  429920 docker.go:295] overlay module found
	I1208 18:29:57.788885  429920 out.go:177] * Using the docker driver based on user configuration
	I1208 18:29:57.790197  429920 start.go:298] selected driver: docker
	I1208 18:29:57.790207  429920 start.go:902] validating driver "docker" against <nil>
	I1208 18:29:57.790220  429920 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:29:57.791004  429920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:29:57.843595  429920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-08 18:29:57.835527782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:29:57.843783  429920 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 18:29:57.844039  429920 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 18:29:57.845674  429920 out.go:177] * Using Docker driver with root privileges
	I1208 18:29:57.846865  429920 cni.go:84] Creating CNI manager for ""
	I1208 18:29:57.846891  429920 cni.go:136] 0 nodes found, recommending kindnet
	I1208 18:29:57.846906  429920 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 18:29:57.846923  429920 start_flags.go:323] config:
	{Name:multinode-985452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:29:57.848413  429920 out.go:177] * Starting control plane node multinode-985452 in cluster multinode-985452
	I1208 18:29:57.849652  429920 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:29:57.850873  429920 out.go:177] * Pulling base image ...
	I1208 18:29:57.852068  429920 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:29:57.852105  429920 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1208 18:29:57.852115  429920 cache.go:56] Caching tarball of preloaded images
	I1208 18:29:57.852145  429920 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:29:57.852191  429920 preload.go:174] Found /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 18:29:57.852201  429920 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1208 18:29:57.852544  429920 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/config.json ...
	I1208 18:29:57.852567  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/config.json: {Name:mk6a195c380447f93044a16b7a04d3ee504569f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:29:57.867948  429920 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon, skipping pull
	I1208 18:29:57.867986  429920 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in daemon, skipping load
	I1208 18:29:57.868003  429920 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:29:57.868053  429920 start.go:365] acquiring machines lock for multinode-985452: {Name:mkf5ff44da211d58cfaf087d39b8fbf4eb365996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:29:57.868162  429920 start.go:369] acquired machines lock for "multinode-985452" in 78.41µs
	I1208 18:29:57.868185  429920 start.go:93] Provisioning new machine with config: &{Name:multinode-985452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:29:57.868277  429920 start.go:125] createHost starting for "" (driver="docker")
	I1208 18:29:57.870267  429920 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1208 18:29:57.870518  429920 start.go:159] libmachine.API.Create for "multinode-985452" (driver="docker")
	I1208 18:29:57.870571  429920 client.go:168] LocalClient.Create starting
	I1208 18:29:57.870647  429920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem
	I1208 18:29:57.870686  429920 main.go:141] libmachine: Decoding PEM data...
	I1208 18:29:57.870709  429920 main.go:141] libmachine: Parsing certificate...
	I1208 18:29:57.870771  429920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem
	I1208 18:29:57.870791  429920 main.go:141] libmachine: Decoding PEM data...
	I1208 18:29:57.870799  429920 main.go:141] libmachine: Parsing certificate...
	I1208 18:29:57.871136  429920 cli_runner.go:164] Run: docker network inspect multinode-985452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 18:29:57.886607  429920 cli_runner.go:211] docker network inspect multinode-985452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 18:29:57.886710  429920 network_create.go:281] running [docker network inspect multinode-985452] to gather additional debugging logs...
	I1208 18:29:57.886743  429920 cli_runner.go:164] Run: docker network inspect multinode-985452
	W1208 18:29:57.902419  429920 cli_runner.go:211] docker network inspect multinode-985452 returned with exit code 1
	I1208 18:29:57.902467  429920 network_create.go:284] error running [docker network inspect multinode-985452]: docker network inspect multinode-985452: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-985452 not found
	I1208 18:29:57.902491  429920 network_create.go:286] output of [docker network inspect multinode-985452]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-985452 not found
	
	** /stderr **
	I1208 18:29:57.902613  429920 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:29:57.918632  429920 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-78944ff53cb8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:da:35:5a:49} reservation:<nil>}
	I1208 18:29:57.919119  429920 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020d0f20}
	I1208 18:29:57.919149  429920 network_create.go:124] attempt to create docker network multinode-985452 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1208 18:29:57.919210  429920 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985452 multinode-985452
	I1208 18:29:57.969665  429920 network_create.go:108] docker network multinode-985452 192.168.58.0/24 created
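	# Verification sketch (assuming the network is still up): the subnet and gateway
	# chosen above can be read back with the same IPAM template fields the harness
	# queries elsewhere in this log:
	#   docker network inspect multinode-985452 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	#   expected for this run: 192.168.58.0/24 192.168.58.1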
	I1208 18:29:57.969698  429920 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-985452" container
	I1208 18:29:57.969756  429920 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 18:29:57.985407  429920 cli_runner.go:164] Run: docker volume create multinode-985452 --label name.minikube.sigs.k8s.io=multinode-985452 --label created_by.minikube.sigs.k8s.io=true
	I1208 18:29:58.002088  429920 oci.go:103] Successfully created a docker volume multinode-985452
	I1208 18:29:58.002174  429920 cli_runner.go:164] Run: docker run --rm --name multinode-985452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985452 --entrypoint /usr/bin/test -v multinode-985452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib
	I1208 18:29:58.485655  429920 oci.go:107] Successfully prepared a docker volume multinode-985452
	I1208 18:29:58.485707  429920 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:29:58.485730  429920 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 18:29:58.485794  429920 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 18:30:03.682770  429920 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.196931795s)
	I1208 18:30:03.682806  429920 kic.go:203] duration metric: took 5.197070 seconds to extract preloaded images to volume
	W1208 18:30:03.682983  429920 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 18:30:03.683110  429920 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 18:30:03.733146  429920 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-985452 --name multinode-985452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-985452 --network multinode-985452 --ip 192.168.58.2 --volume multinode-985452:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:30:04.019982  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Running}}
	I1208 18:30:04.037360  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:30:04.055461  429920 cli_runner.go:164] Run: docker exec multinode-985452 stat /var/lib/dpkg/alternatives/iptables
	I1208 18:30:04.111236  429920 oci.go:144] the created container "multinode-985452" has a running status.
	I1208 18:30:04.111280  429920 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa...
	I1208 18:30:04.249382  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1208 18:30:04.249427  429920 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 18:30:04.268238  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:30:04.283623  429920 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 18:30:04.283656  429920 kic_runner.go:114] Args: [docker exec --privileged multinode-985452 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 18:30:04.340287  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:30:04.362047  429920 machine.go:88] provisioning docker machine ...
	I1208 18:30:04.362084  429920 ubuntu.go:169] provisioning hostname "multinode-985452"
	I1208 18:30:04.362147  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:04.378294  429920 main.go:141] libmachine: Using SSH client type: native
	I1208 18:30:04.378844  429920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1208 18:30:04.378874  429920 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985452 && echo "multinode-985452" | sudo tee /etc/hostname
	I1208 18:30:04.379558  429920 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57838->127.0.0.1:33149: read: connection reset by peer
	I1208 18:30:07.512643  429920 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985452
	
	I1208 18:30:07.512757  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:07.528613  429920 main.go:141] libmachine: Using SSH client type: native
	I1208 18:30:07.528940  429920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1208 18:30:07.528960  429920 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985452' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985452/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985452' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 18:30:07.650760  429920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:30:07.650817  429920 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:30:07.650858  429920 ubuntu.go:177] setting up certificates
	I1208 18:30:07.650873  429920 provision.go:83] configureAuth start
	I1208 18:30:07.650934  429920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452
	I1208 18:30:07.667147  429920 provision.go:138] copyHostCerts
	I1208 18:30:07.667189  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:30:07.667216  429920 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem, removing ...
	I1208 18:30:07.667223  429920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:30:07.667285  429920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:30:07.667357  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:30:07.667386  429920 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem, removing ...
	I1208 18:30:07.667397  429920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:30:07.667423  429920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:30:07.667465  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:30:07.667487  429920 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem, removing ...
	I1208 18:30:07.667493  429920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:30:07.667512  429920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:30:07.667554  429920 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.multinode-985452 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-985452]
	I1208 18:30:07.743378  429920 provision.go:172] copyRemoteCerts
	I1208 18:30:07.743458  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:30:07.743496  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:07.760041  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:30:07.850797  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 18:30:07.850854  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:30:07.872704  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 18:30:07.872766  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1208 18:30:07.895287  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 18:30:07.895366  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
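	# Verification sketch: the SAN list requested above (192.168.58.2 127.0.0.1
	# localhost minikube multinode-985452) should appear on the server cert copied
	# to /etc/docker/server.pem; checking it with openssl inside the node assumes
	# the kicbase image ships openssl:
	#   docker exec multinode-985452 openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName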
	I1208 18:30:07.916700  429920 provision.go:86] duration metric: configureAuth took 265.808028ms
	I1208 18:30:07.916734  429920 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:30:07.916959  429920 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:30:07.917085  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:07.933257  429920 main.go:141] libmachine: Using SSH client type: native
	I1208 18:30:07.933620  429920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33149 <nil> <nil>}
	I1208 18:30:07.933644  429920 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:30:08.138145  429920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:30:08.138176  429920 machine.go:91] provisioned docker machine in 3.776105828s
	I1208 18:30:08.138187  429920 client.go:171] LocalClient.Create took 10.267604409s
	I1208 18:30:08.138205  429920 start.go:167] duration metric: libmachine.API.Create for "multinode-985452" took 10.267689126s
	I1208 18:30:08.138215  429920 start.go:300] post-start starting for "multinode-985452" (driver="docker")
	I1208 18:30:08.138227  429920 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:30:08.138292  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:30:08.138353  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:08.154707  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:30:08.246992  429920 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:30:08.250046  429920 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1208 18:30:08.250071  429920 command_runner.go:130] > NAME="Ubuntu"
	I1208 18:30:08.250080  429920 command_runner.go:130] > VERSION_ID="22.04"
	I1208 18:30:08.250086  429920 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1208 18:30:08.250093  429920 command_runner.go:130] > VERSION_CODENAME=jammy
	I1208 18:30:08.250098  429920 command_runner.go:130] > ID=ubuntu
	I1208 18:30:08.250133  429920 command_runner.go:130] > ID_LIKE=debian
	I1208 18:30:08.250148  429920 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1208 18:30:08.250161  429920 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1208 18:30:08.250175  429920 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1208 18:30:08.250190  429920 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1208 18:30:08.250201  429920 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1208 18:30:08.250271  429920 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:30:08.250308  429920 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:30:08.250326  429920 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:30:08.250339  429920 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1208 18:30:08.250354  429920 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:30:08.250409  429920 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:30:08.250609  429920 filesync.go:149] local asset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> 3436282.pem in /etc/ssl/certs
	I1208 18:30:08.250634  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> /etc/ssl/certs/3436282.pem
	I1208 18:30:08.250762  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 18:30:08.258185  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:30:08.278658  429920 start.go:303] post-start completed in 140.42592ms
	I1208 18:30:08.279260  429920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452
	I1208 18:30:08.295043  429920 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/config.json ...
	I1208 18:30:08.295283  429920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:30:08.295334  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:08.310831  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:30:08.394931  429920 command_runner.go:130] > 20%
	I1208 18:30:08.395131  429920 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:30:08.399124  429920 command_runner.go:130] > 233G
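The two df runs above grab the use percentage of /var (column 5 with -h) and the free space in whole gigabytes (column 4 with -BG). A small Go sketch of the same check, parsing the second output row exactly as the awk 'NR==2' does; dfField is a hypothetical helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dfField runs df with the given args and returns the 1-based column
    // from the second line of output (the data row for the filesystem).
    func dfField(args []string, col int) (string, error) {
    	out, err := exec.Command("df", args...).Output()
    	if err != nil {
    		return "", err
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	if len(lines) < 2 {
    		return "", fmt.Errorf("unexpected df output")
    	}
    	fields := strings.Fields(lines[1])
    	if col > len(fields) {
    		return "", fmt.Errorf("missing column %d", col)
    	}
    	return fields[col-1], nil
    }

    func main() {
    	usedPct, _ := dfField([]string{"-h", "/var"}, 5)  // Use% column
    	freeGB, _ := dfField([]string{"-BG", "/var"}, 4)  // Available column
    	fmt.Println("used:", usedPct, "free:", freeGB)
    }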
	I1208 18:30:08.399269  429920 start.go:128] duration metric: createHost completed in 10.530978978s
	I1208 18:30:08.399291  429920 start.go:83] releasing machines lock for "multinode-985452", held for 10.531117249s
	I1208 18:30:08.399366  429920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452
	I1208 18:30:08.415989  429920 ssh_runner.go:195] Run: cat /version.json
	I1208 18:30:08.416043  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:08.416092  429920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:30:08.416152  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:08.433479  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:30:08.433673  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:30:08.599303  429920 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 18:30:08.601380  429920 command_runner.go:130] > {"iso_version": "v1.32.1-1701788780-17711", "kicbase_version": "v0.0.42-1701996201-17738", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
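The /version.json payload above is plain JSON, so it decodes into a small struct. A sketch assuming only the four keys visible in the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // nodeVersion mirrors the keys shown in the /version.json output above.
    type nodeVersion struct {
    	ISOVersion      string `json:"iso_version"`
    	KicbaseVersion  string `json:"kicbase_version"`
    	MinikubeVersion string `json:"minikube_version"`
    	Commit          string `json:"commit"`
    }

    func main() {
    	data, err := os.ReadFile("/version.json")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var v nodeVersion
    	if err := json.Unmarshal(data, &v); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("%+v\n", v)
    }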
	I1208 18:30:08.601538  429920 ssh_runner.go:195] Run: systemctl --version
	I1208 18:30:08.605474  429920 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1208 18:30:08.605520  429920 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 18:30:08.605578  429920 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:30:08.743347  429920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:30:08.747504  429920 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1208 18:30:08.747537  429920 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1208 18:30:08.747548  429920 command_runner.go:130] > Device: 34h/52d	Inode: 1299647     Links: 1
	I1208 18:30:08.747558  429920 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 18:30:08.747572  429920 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1208 18:30:08.747583  429920 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1208 18:30:08.747588  429920 command_runner.go:130] > Change: 2023-12-08 18:10:37.396658804 +0000
	I1208 18:30:08.747593  429920 command_runner.go:130] >  Birth: 2023-12-08 18:10:37.396658804 +0000
	I1208 18:30:08.747655  429920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:30:08.765007  429920 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:30:08.765116  429920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:30:08.789868  429920 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1208 18:30:08.789910  429920 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
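Rather than deleting the conflicting loopback and bridge CNI configs, the steps above move them aside with a .mk_disabled suffix so they can be restored later. A minimal Go sketch of that rename-to-disable pattern over /etc/cni/net.d; disableCNIConfigs is a hypothetical name, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfigs renames every file in dir matching one of the glob
    // patterns to "<name>.mk_disabled", skipping files already disabled.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
    	var disabled []string
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		for _, pat := range patterns {
    			if ok, _ := filepath.Match(pat, name); ok {
    				src := filepath.Join(dir, name)
    				if err := os.Rename(src, src+".mk_disabled"); err != nil {
    					return disabled, err
    				}
    				disabled = append(disabled, src)
    				break
    			}
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	moved, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
    	fmt.Println("disabled:", moved, err)
    }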
	I1208 18:30:08.789921  429920 start.go:475] detecting cgroup driver to use...
	I1208 18:30:08.789969  429920 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:30:08.790025  429920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:30:08.803533  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:30:08.813340  429920 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:30:08.813396  429920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:30:08.825228  429920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:30:08.837621  429920 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 18:30:08.914742  429920 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:30:08.994268  429920 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1208 18:30:08.994300  429920 docker.go:219] disabling docker service ...
	I1208 18:30:08.994351  429920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:30:09.011296  429920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:30:09.020970  429920 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:30:09.030991  429920 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1208 18:30:09.101985  429920 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:30:09.179164  429920 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1208 18:30:09.179239  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 18:30:09.189307  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:30:09.203324  429920 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 18:30:09.203376  429920 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1208 18:30:09.203429  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:30:09.212119  429920 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 18:30:09.212196  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:30:09.220739  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:30:09.228933  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
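Taken together, the three sed edits above should leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with lines like the following (the same values show up again in the crio config dump further down):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"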
	I1208 18:30:09.237498  429920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 18:30:09.245344  429920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 18:30:09.252099  429920 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 18:30:09.252725  429920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
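The sysctl probe and the echo above cover the two kernel prerequisites for pod networking: bridged traffic must pass through iptables, and IPv4 forwarding must be on. A hedged Go sketch doing the same through /proc; the write needs root, just like the sudo sh -c in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Read back the bridge-nf-call-iptables setting verified above.
    	v, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables")
    	fmt.Println("bridge-nf-call-iptables =", strings.TrimSpace(string(v)), err)

    	// Force IPv4 forwarding on, as the echo in the log does.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		fmt.Println("enable ip_forward:", err)
    	}
    }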
	I1208 18:30:09.260205  429920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 18:30:09.333963  429920 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 18:30:09.443982  429920 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 18:30:09.444075  429920 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 18:30:09.447753  429920 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 18:30:09.447775  429920 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 18:30:09.447781  429920 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1208 18:30:09.447788  429920 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 18:30:09.447796  429920 command_runner.go:130] > Access: 2023-12-08 18:30:09.432073750 +0000
	I1208 18:30:09.447813  429920 command_runner.go:130] > Modify: 2023-12-08 18:30:09.432073750 +0000
	I1208 18:30:09.447825  429920 command_runner.go:130] > Change: 2023-12-08 18:30:09.432073750 +0000
	I1208 18:30:09.447833  429920 command_runner.go:130] >  Birth: -
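The "Will wait 60s for socket path" gate above amounts to polling stat on the CRI-O socket until it exists. A minimal sketch of that wait loop; the 500 ms poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }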
	I1208 18:30:09.447868  429920 start.go:543] Will wait 60s for crictl version
	I1208 18:30:09.447911  429920 ssh_runner.go:195] Run: which crictl
	I1208 18:30:09.451164  429920 command_runner.go:130] > /usr/bin/crictl
	I1208 18:30:09.451216  429920 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 18:30:09.481377  429920 command_runner.go:130] > Version:  0.1.0
	I1208 18:30:09.481418  429920 command_runner.go:130] > RuntimeName:  cri-o
	I1208 18:30:09.481425  429920 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1208 18:30:09.481432  429920 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 18:30:09.483413  429920 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1208 18:30:09.483506  429920 ssh_runner.go:195] Run: crio --version
	I1208 18:30:09.518150  429920 command_runner.go:130] > crio version 1.24.6
	I1208 18:30:09.518173  429920 command_runner.go:130] > Version:          1.24.6
	I1208 18:30:09.518179  429920 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1208 18:30:09.518184  429920 command_runner.go:130] > GitTreeState:     clean
	I1208 18:30:09.518190  429920 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1208 18:30:09.518195  429920 command_runner.go:130] > GoVersion:        go1.18.2
	I1208 18:30:09.518199  429920 command_runner.go:130] > Compiler:         gc
	I1208 18:30:09.518203  429920 command_runner.go:130] > Platform:         linux/amd64
	I1208 18:30:09.518208  429920 command_runner.go:130] > Linkmode:         dynamic
	I1208 18:30:09.518215  429920 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1208 18:30:09.518223  429920 command_runner.go:130] > SeccompEnabled:   true
	I1208 18:30:09.518230  429920 command_runner.go:130] > AppArmorEnabled:  false
	I1208 18:30:09.518292  429920 ssh_runner.go:195] Run: crio --version
	I1208 18:30:09.550493  429920 command_runner.go:130] > crio version 1.24.6
	I1208 18:30:09.550522  429920 command_runner.go:130] > Version:          1.24.6
	I1208 18:30:09.550534  429920 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1208 18:30:09.550543  429920 command_runner.go:130] > GitTreeState:     clean
	I1208 18:30:09.550553  429920 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1208 18:30:09.550562  429920 command_runner.go:130] > GoVersion:        go1.18.2
	I1208 18:30:09.550570  429920 command_runner.go:130] > Compiler:         gc
	I1208 18:30:09.550587  429920 command_runner.go:130] > Platform:         linux/amd64
	I1208 18:30:09.550597  429920 command_runner.go:130] > Linkmode:         dynamic
	I1208 18:30:09.550607  429920 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1208 18:30:09.550616  429920 command_runner.go:130] > SeccompEnabled:   true
	I1208 18:30:09.550621  429920 command_runner.go:130] > AppArmorEnabled:  false
	I1208 18:30:09.552881  429920 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1208 18:30:09.554470  429920 cli_runner.go:164] Run: docker network inspect multinode-985452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:30:09.570547  429920 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1208 18:30:09.574020  429920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
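The bash one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the gateway mapping, and copy the result back. A hedged Go equivalent for illustration; pinHost is a hypothetical name, not minikube's code:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost removes any line ending in "<tab><name>" and appends a fresh
    // "ip<tab>name" entry, mirroring the grep -v / echo pipeline in the log.
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
    		fmt.Println("error:", err)
    	}
    }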
	I1208 18:30:09.584136  429920 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:30:09.584183  429920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:30:09.637250  429920 command_runner.go:130] > {
	I1208 18:30:09.637272  429920 command_runner.go:130] >   "images": [
	I1208 18:30:09.637276  429920 command_runner.go:130] >     {
	I1208 18:30:09.637284  429920 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1208 18:30:09.637290  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.637303  429920 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1208 18:30:09.637318  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637329  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.637341  429920 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1208 18:30:09.637350  429920 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1208 18:30:09.637356  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637361  429920 command_runner.go:130] >       "size": "65258016",
	I1208 18:30:09.637367  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.637371  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.637383  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.637389  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.637397  429920 command_runner.go:130] >     },
	I1208 18:30:09.637405  429920 command_runner.go:130] >     {
	I1208 18:30:09.637415  429920 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1208 18:30:09.637434  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.637445  429920 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 18:30:09.637452  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637456  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.637467  429920 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1208 18:30:09.637486  429920 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1208 18:30:09.637497  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637511  429920 command_runner.go:130] >       "size": "31470524",
	I1208 18:30:09.637521  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.637529  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.637538  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.637546  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.637550  429920 command_runner.go:130] >     },
	I1208 18:30:09.637559  429920 command_runner.go:130] >     {
	I1208 18:30:09.637573  429920 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1208 18:30:09.637584  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.637593  429920 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1208 18:30:09.637602  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637609  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.637622  429920 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1208 18:30:09.637632  429920 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1208 18:30:09.637641  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637648  429920 command_runner.go:130] >       "size": "53621675",
	I1208 18:30:09.637667  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.637677  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.637687  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.637696  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.637705  429920 command_runner.go:130] >     },
	I1208 18:30:09.637710  429920 command_runner.go:130] >     {
	I1208 18:30:09.637717  429920 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1208 18:30:09.637722  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.637730  429920 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1208 18:30:09.637740  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637748  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.637762  429920 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1208 18:30:09.637778  429920 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1208 18:30:09.637794  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637802  429920 command_runner.go:130] >       "size": "295456551",
	I1208 18:30:09.637806  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.637813  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.637823  429920 command_runner.go:130] >       },
	I1208 18:30:09.637833  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.637843  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.637853  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.637860  429920 command_runner.go:130] >     },
	I1208 18:30:09.637873  429920 command_runner.go:130] >     {
	I1208 18:30:09.637884  429920 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1208 18:30:09.637890  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.637899  429920 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1208 18:30:09.637908  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637916  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.637930  429920 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1208 18:30:09.637945  429920 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1208 18:30:09.637953  429920 command_runner.go:130] >       ],
	I1208 18:30:09.637963  429920 command_runner.go:130] >       "size": "127226832",
	I1208 18:30:09.637968  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.637975  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.637980  429920 command_runner.go:130] >       },
	I1208 18:30:09.637991  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.638006  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.638016  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.638022  429920 command_runner.go:130] >     },
	I1208 18:30:09.638031  429920 command_runner.go:130] >     {
	I1208 18:30:09.638042  429920 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1208 18:30:09.638051  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.638058  429920 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1208 18:30:09.638066  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638073  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.638090  429920 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1208 18:30:09.638105  429920 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1208 18:30:09.638115  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638122  429920 command_runner.go:130] >       "size": "123261750",
	I1208 18:30:09.638135  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.638143  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.638147  429920 command_runner.go:130] >       },
	I1208 18:30:09.638157  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.638166  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.638179  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.638188  429920 command_runner.go:130] >     },
	I1208 18:30:09.638195  429920 command_runner.go:130] >     {
	I1208 18:30:09.638208  429920 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1208 18:30:09.638218  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.638225  429920 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1208 18:30:09.638232  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638237  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.638253  429920 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1208 18:30:09.638269  429920 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1208 18:30:09.638278  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638285  429920 command_runner.go:130] >       "size": "74749335",
	I1208 18:30:09.638295  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.638302  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.638310  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.638315  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.638322  429920 command_runner.go:130] >     },
	I1208 18:30:09.638329  429920 command_runner.go:130] >     {
	I1208 18:30:09.638347  429920 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1208 18:30:09.638358  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.638369  429920 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1208 18:30:09.638378  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638385  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.638415  429920 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1208 18:30:09.638437  429920 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1208 18:30:09.638459  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638468  429920 command_runner.go:130] >       "size": "61551410",
	I1208 18:30:09.638478  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.638484  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.638495  429920 command_runner.go:130] >       },
	I1208 18:30:09.638503  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.638512  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.638520  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.638528  429920 command_runner.go:130] >     },
	I1208 18:30:09.638535  429920 command_runner.go:130] >     {
	I1208 18:30:09.638555  429920 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1208 18:30:09.638568  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.638579  429920 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1208 18:30:09.638588  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638596  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.638606  429920 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1208 18:30:09.638620  429920 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1208 18:30:09.638629  429920 command_runner.go:130] >       ],
	I1208 18:30:09.638640  429920 command_runner.go:130] >       "size": "750414",
	I1208 18:30:09.638649  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.638659  429920 command_runner.go:130] >         "value": "65535"
	I1208 18:30:09.638668  429920 command_runner.go:130] >       },
	I1208 18:30:09.638675  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.638685  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.638690  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.638694  429920 command_runner.go:130] >     }
	I1208 18:30:09.638698  429920 command_runner.go:130] >   ]
	I1208 18:30:09.638703  429920 command_runner.go:130] > }
	I1208 18:30:09.638961  429920 crio.go:496] all images are preloaded for cri-o runtime.
	I1208 18:30:09.638985  429920 crio.go:415] Images already preloaded, skipping extraction
	I1208 18:30:09.639033  429920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 18:30:09.671392  429920 command_runner.go:130] > {
	I1208 18:30:09.671419  429920 command_runner.go:130] >   "images": [
	I1208 18:30:09.671424  429920 command_runner.go:130] >     {
	I1208 18:30:09.671436  429920 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1208 18:30:09.671441  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.671446  429920 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1208 18:30:09.671450  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671454  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.671462  429920 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1208 18:30:09.671469  429920 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1208 18:30:09.671482  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671489  429920 command_runner.go:130] >       "size": "65258016",
	I1208 18:30:09.671493  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.671498  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.671503  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.671511  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.671515  429920 command_runner.go:130] >     },
	I1208 18:30:09.671522  429920 command_runner.go:130] >     {
	I1208 18:30:09.671528  429920 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1208 18:30:09.671532  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.671537  429920 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 18:30:09.671541  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671545  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.671552  429920 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1208 18:30:09.671559  429920 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1208 18:30:09.671563  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671577  429920 command_runner.go:130] >       "size": "31470524",
	I1208 18:30:09.671584  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.671588  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.671592  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.671597  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.671600  429920 command_runner.go:130] >     },
	I1208 18:30:09.671605  429920 command_runner.go:130] >     {
	I1208 18:30:09.671615  429920 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1208 18:30:09.671622  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.671627  429920 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1208 18:30:09.671633  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671638  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.671647  429920 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1208 18:30:09.671654  429920 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1208 18:30:09.671660  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671664  429920 command_runner.go:130] >       "size": "53621675",
	I1208 18:30:09.671671  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.671680  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.671684  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.671691  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.671695  429920 command_runner.go:130] >     },
	I1208 18:30:09.671698  429920 command_runner.go:130] >     {
	I1208 18:30:09.671704  429920 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1208 18:30:09.671710  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.671716  429920 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1208 18:30:09.671724  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671729  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.671736  429920 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1208 18:30:09.671745  429920 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1208 18:30:09.671772  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671782  429920 command_runner.go:130] >       "size": "295456551",
	I1208 18:30:09.671786  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.671790  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.671797  429920 command_runner.go:130] >       },
	I1208 18:30:09.671802  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.671808  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.671812  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.671816  429920 command_runner.go:130] >     },
	I1208 18:30:09.671819  429920 command_runner.go:130] >     {
	I1208 18:30:09.671826  429920 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1208 18:30:09.671832  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.671838  429920 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1208 18:30:09.671844  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671851  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.671861  429920 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1208 18:30:09.671871  429920 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1208 18:30:09.671876  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671880  429920 command_runner.go:130] >       "size": "127226832",
	I1208 18:30:09.671885  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.671890  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.671896  429920 command_runner.go:130] >       },
	I1208 18:30:09.671899  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.671911  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.671915  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.671919  429920 command_runner.go:130] >     },
	I1208 18:30:09.671922  429920 command_runner.go:130] >     {
	I1208 18:30:09.671931  429920 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1208 18:30:09.671935  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.671943  429920 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1208 18:30:09.671947  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671954  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.671963  429920 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1208 18:30:09.671973  429920 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1208 18:30:09.671977  429920 command_runner.go:130] >       ],
	I1208 18:30:09.671981  429920 command_runner.go:130] >       "size": "123261750",
	I1208 18:30:09.671987  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.671991  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.671997  429920 command_runner.go:130] >       },
	I1208 18:30:09.672001  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.672005  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.672009  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.672015  429920 command_runner.go:130] >     },
	I1208 18:30:09.672018  429920 command_runner.go:130] >     {
	I1208 18:30:09.672027  429920 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1208 18:30:09.672031  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.672037  429920 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1208 18:30:09.672041  429920 command_runner.go:130] >       ],
	I1208 18:30:09.672045  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.672054  429920 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1208 18:30:09.672066  429920 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1208 18:30:09.672072  429920 command_runner.go:130] >       ],
	I1208 18:30:09.672076  429920 command_runner.go:130] >       "size": "74749335",
	I1208 18:30:09.672080  429920 command_runner.go:130] >       "uid": null,
	I1208 18:30:09.672084  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.672088  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.672092  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.672098  429920 command_runner.go:130] >     },
	I1208 18:30:09.672101  429920 command_runner.go:130] >     {
	I1208 18:30:09.672109  429920 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1208 18:30:09.672113  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.672119  429920 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1208 18:30:09.672125  429920 command_runner.go:130] >       ],
	I1208 18:30:09.672129  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.672153  429920 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1208 18:30:09.672162  429920 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1208 18:30:09.672166  429920 command_runner.go:130] >       ],
	I1208 18:30:09.672172  429920 command_runner.go:130] >       "size": "61551410",
	I1208 18:30:09.672180  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.672186  429920 command_runner.go:130] >         "value": "0"
	I1208 18:30:09.672190  429920 command_runner.go:130] >       },
	I1208 18:30:09.672195  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.672199  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.672204  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.672208  429920 command_runner.go:130] >     },
	I1208 18:30:09.672214  429920 command_runner.go:130] >     {
	I1208 18:30:09.672219  429920 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1208 18:30:09.672226  429920 command_runner.go:130] >       "repoTags": [
	I1208 18:30:09.672230  429920 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1208 18:30:09.672236  429920 command_runner.go:130] >       ],
	I1208 18:30:09.672240  429920 command_runner.go:130] >       "repoDigests": [
	I1208 18:30:09.672249  429920 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1208 18:30:09.672258  429920 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1208 18:30:09.672262  429920 command_runner.go:130] >       ],
	I1208 18:30:09.672266  429920 command_runner.go:130] >       "size": "750414",
	I1208 18:30:09.672273  429920 command_runner.go:130] >       "uid": {
	I1208 18:30:09.672279  429920 command_runner.go:130] >         "value": "65535"
	I1208 18:30:09.672284  429920 command_runner.go:130] >       },
	I1208 18:30:09.672288  429920 command_runner.go:130] >       "username": "",
	I1208 18:30:09.672293  429920 command_runner.go:130] >       "spec": null,
	I1208 18:30:09.672302  429920 command_runner.go:130] >       "pinned": false
	I1208 18:30:09.672308  429920 command_runner.go:130] >     }
	I1208 18:30:09.672311  429920 command_runner.go:130] >   ]
	I1208 18:30:09.672314  429920 command_runner.go:130] > }
	I1208 18:30:09.672427  429920 crio.go:496] all images are preloaded for cri-o runtime.
	I1208 18:30:09.672443  429920 cache_images.go:84] Images are preloaded, skipping loading
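The "all images are preloaded" decision above comes from decoding the crictl images JSON and checking that the expected tags for this Kubernetes version are present. A minimal sketch under that assumption; the JSON field names mirror the log output, but the required-image list and comparison logic here are illustrative, not minikube's actual check:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json` above.
    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // allPreloaded reports whether every required tag is already present in
    // the runtime's image store.
    func allPreloaded(required []string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range required {
    		if !have[want] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := allPreloaded([]string{
    		"registry.k8s.io/kube-apiserver:v1.28.4",
    		"registry.k8s.io/etcd:3.5.9-0",
    		"registry.k8s.io/pause:3.9",
    	})
    	fmt.Println(ok, err)
    }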
	I1208 18:30:09.672498  429920 ssh_runner.go:195] Run: crio config
	I1208 18:30:09.708491  429920 command_runner.go:130] ! time="2023-12-08 18:30:09.708070155Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1208 18:30:09.708524  429920 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1208 18:30:09.713128  429920 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 18:30:09.713160  429920 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 18:30:09.713171  429920 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 18:30:09.713176  429920 command_runner.go:130] > #
	I1208 18:30:09.713188  429920 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 18:30:09.713202  429920 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 18:30:09.713215  429920 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 18:30:09.713238  429920 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 18:30:09.713246  429920 command_runner.go:130] > # reload'.
	I1208 18:30:09.713260  429920 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 18:30:09.713281  429920 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 18:30:09.713295  429920 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 18:30:09.713307  429920 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 18:30:09.713316  429920 command_runner.go:130] > [crio]
	I1208 18:30:09.713328  429920 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 18:30:09.713337  429920 command_runner.go:130] > # containers images, in this directory.
	I1208 18:30:09.713378  429920 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 18:30:09.713392  429920 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 18:30:09.713408  429920 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1208 18:30:09.713418  429920 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 18:30:09.713433  429920 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 18:30:09.713444  429920 command_runner.go:130] > # storage_driver = "vfs"
	I1208 18:30:09.713457  429920 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 18:30:09.713470  429920 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 18:30:09.713480  429920 command_runner.go:130] > # storage_option = [
	I1208 18:30:09.713488  429920 command_runner.go:130] > # ]
	I1208 18:30:09.713500  429920 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 18:30:09.713509  429920 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 18:30:09.713520  429920 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 18:30:09.713534  429920 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 18:30:09.713546  429920 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 18:30:09.713556  429920 command_runner.go:130] > # always happen on a node reboot
	I1208 18:30:09.713569  429920 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 18:30:09.713582  429920 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 18:30:09.713591  429920 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 18:30:09.713610  429920 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 18:30:09.713622  429920 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1208 18:30:09.713637  429920 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 18:30:09.713652  429920 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 18:30:09.713663  429920 command_runner.go:130] > # internal_wipe = true
	I1208 18:30:09.713674  429920 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 18:30:09.713684  429920 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 18:30:09.713696  429920 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 18:30:09.713709  429920 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 18:30:09.713722  429920 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 18:30:09.713731  429920 command_runner.go:130] > [crio.api]
	I1208 18:30:09.713744  429920 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 18:30:09.713755  429920 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 18:30:09.713763  429920 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 18:30:09.713773  429920 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 18:30:09.713788  429920 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 18:30:09.713800  429920 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 18:30:09.713810  429920 command_runner.go:130] > # stream_port = "0"
	I1208 18:30:09.713821  429920 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 18:30:09.713832  429920 command_runner.go:130] > # stream_enable_tls = false
	I1208 18:30:09.713844  429920 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 18:30:09.713851  429920 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 18:30:09.713861  429920 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 18:30:09.713875  429920 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1208 18:30:09.713884  429920 command_runner.go:130] > # minutes.
	I1208 18:30:09.713895  429920 command_runner.go:130] > # stream_tls_cert = ""
	I1208 18:30:09.713916  429920 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 18:30:09.713928  429920 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1208 18:30:09.713935  429920 command_runner.go:130] > # stream_tls_key = ""
	I1208 18:30:09.713947  429920 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 18:30:09.713962  429920 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 18:30:09.713974  429920 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1208 18:30:09.713984  429920 command_runner.go:130] > # stream_tls_ca = ""
	I1208 18:30:09.713999  429920 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1208 18:30:09.714010  429920 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 18:30:09.714021  429920 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1208 18:30:09.714031  429920 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 18:30:09.714070  429920 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 18:30:09.714083  429920 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 18:30:09.714092  429920 command_runner.go:130] > [crio.runtime]
	I1208 18:30:09.714103  429920 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 18:30:09.714113  429920 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 18:30:09.714123  429920 command_runner.go:130] > # "nofile=1024:2048"
	I1208 18:30:09.714137  429920 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 18:30:09.714147  429920 command_runner.go:130] > # default_ulimits = [
	I1208 18:30:09.714156  429920 command_runner.go:130] > # ]
	I1208 18:30:09.714169  429920 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 18:30:09.714182  429920 command_runner.go:130] > # no_pivot = false
	I1208 18:30:09.714195  429920 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 18:30:09.714207  429920 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 18:30:09.714220  429920 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 18:30:09.714232  429920 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 18:30:09.714244  429920 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 18:30:09.714257  429920 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 18:30:09.714267  429920 command_runner.go:130] > # conmon = ""
	I1208 18:30:09.714275  429920 command_runner.go:130] > # Cgroup setting for conmon
	I1208 18:30:09.714286  429920 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 18:30:09.714297  429920 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 18:30:09.714311  429920 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 18:30:09.714323  429920 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 18:30:09.714337  429920 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 18:30:09.714351  429920 command_runner.go:130] > # conmon_env = [
	I1208 18:30:09.714356  429920 command_runner.go:130] > # ]
	I1208 18:30:09.714364  429920 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 18:30:09.714373  429920 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 18:30:09.714390  429920 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 18:30:09.714400  429920 command_runner.go:130] > # default_env = [
	I1208 18:30:09.714409  429920 command_runner.go:130] > # ]
	I1208 18:30:09.714421  429920 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 18:30:09.714431  429920 command_runner.go:130] > # selinux = false
	I1208 18:30:09.714442  429920 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 18:30:09.714470  429920 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1208 18:30:09.714483  429920 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1208 18:30:09.714494  429920 command_runner.go:130] > # seccomp_profile = ""
	I1208 18:30:09.714506  429920 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1208 18:30:09.714519  429920 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1208 18:30:09.714529  429920 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1208 18:30:09.714534  429920 command_runner.go:130] > # which might increase security.
	I1208 18:30:09.714545  429920 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1208 18:30:09.714559  429920 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 18:30:09.714572  429920 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 18:30:09.714585  429920 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 18:30:09.714598  429920 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 18:30:09.714614  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:30:09.714621  429920 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 18:30:09.714629  429920 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 18:30:09.714639  429920 command_runner.go:130] > # the cgroup blockio controller.
	I1208 18:30:09.714650  429920 command_runner.go:130] > # blockio_config_file = ""
	I1208 18:30:09.714664  429920 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 18:30:09.714675  429920 command_runner.go:130] > # irqbalance daemon.
	I1208 18:30:09.714687  429920 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 18:30:09.714700  429920 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 18:30:09.714708  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:30:09.714715  429920 command_runner.go:130] > # rdt_config_file = ""
	I1208 18:30:09.714724  429920 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 18:30:09.714735  429920 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 18:30:09.714748  429920 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 18:30:09.714759  429920 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 18:30:09.714772  429920 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 18:30:09.714785  429920 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 18:30:09.714793  429920 command_runner.go:130] > # will be added.
	I1208 18:30:09.714799  429920 command_runner.go:130] > # default_capabilities = [
	I1208 18:30:09.714809  429920 command_runner.go:130] > # 	"CHOWN",
	I1208 18:30:09.714819  429920 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 18:30:09.714828  429920 command_runner.go:130] > # 	"FSETID",
	I1208 18:30:09.714838  429920 command_runner.go:130] > # 	"FOWNER",
	I1208 18:30:09.714846  429920 command_runner.go:130] > # 	"SETGID",
	I1208 18:30:09.714856  429920 command_runner.go:130] > # 	"SETUID",
	I1208 18:30:09.714863  429920 command_runner.go:130] > # 	"SETPCAP",
	I1208 18:30:09.714873  429920 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 18:30:09.714880  429920 command_runner.go:130] > # 	"KILL",
	I1208 18:30:09.714887  429920 command_runner.go:130] > # ]
	I1208 18:30:09.714903  429920 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 18:30:09.714917  429920 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 18:30:09.714928  429920 command_runner.go:130] > # add_inheritable_capabilities = true
	I1208 18:30:09.714941  429920 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 18:30:09.714954  429920 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 18:30:09.714963  429920 command_runner.go:130] > # default_sysctls = [
	I1208 18:30:09.714969  429920 command_runner.go:130] > # ]
	I1208 18:30:09.714980  429920 command_runner.go:130] > # List of devices on the host that a
	I1208 18:30:09.714994  429920 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 18:30:09.715004  429920 command_runner.go:130] > # allowed_devices = [
	I1208 18:30:09.715014  429920 command_runner.go:130] > # 	"/dev/fuse",
	I1208 18:30:09.715023  429920 command_runner.go:130] > # ]
	I1208 18:30:09.715034  429920 command_runner.go:130] > # List of additional devices, specified as
	I1208 18:30:09.715101  429920 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 18:30:09.715120  429920 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 18:30:09.715130  429920 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 18:30:09.715140  429920 command_runner.go:130] > # additional_devices = [
	I1208 18:30:09.715146  429920 command_runner.go:130] > # ]
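A hedged sketch of acting on the example above: recent CRI-O releases also read drop-in files from /etc/crio/crio.conf.d/, so the documented device mapping can be added without editing the main config (the file name and device paths are illustrative assumptions, not from this run):

	sudo tee /etc/crio/crio.conf.d/10-devices.conf <<-'EOF'
	[crio.runtime]
	# illustrative mapping taken from the comment above
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]
	EOF
	sudo systemctl restart crio    # assumes the standard crio systemd unit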
	I1208 18:30:09.715153  429920 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 18:30:09.715163  429920 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 18:30:09.715173  429920 command_runner.go:130] > # 	"/etc/cdi",
	I1208 18:30:09.715182  429920 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 18:30:09.715191  429920 command_runner.go:130] > # ]
	I1208 18:30:09.715204  429920 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 18:30:09.715216  429920 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 18:30:09.715228  429920 command_runner.go:130] > # Defaults to false.
	I1208 18:30:09.715237  429920 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 18:30:09.715251  429920 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 18:30:09.715264  429920 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 18:30:09.715274  429920 command_runner.go:130] > # hooks_dir = [
	I1208 18:30:09.715284  429920 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 18:30:09.715293  429920 command_runner.go:130] > # ]
	I1208 18:30:09.715306  429920 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 18:30:09.715317  429920 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 18:30:09.715327  429920 command_runner.go:130] > # its default mounts from the following two files:
	I1208 18:30:09.715335  429920 command_runner.go:130] > #
	I1208 18:30:09.715353  429920 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 18:30:09.715366  429920 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 18:30:09.715379  429920 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 18:30:09.715387  429920 command_runner.go:130] > #
	I1208 18:30:09.715398  429920 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 18:30:09.715411  429920 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 18:30:09.715425  429920 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 18:30:09.715441  429920 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 18:30:09.715449  429920 command_runner.go:130] > #
	I1208 18:30:09.715460  429920 command_runner.go:130] > # default_mounts_file = ""
	I1208 18:30:09.715469  429920 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 18:30:09.715482  429920 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 18:30:09.715489  429920 command_runner.go:130] > # pids_limit = 0
	I1208 18:30:09.715498  429920 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1208 18:30:09.715512  429920 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 18:30:09.715530  429920 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 18:30:09.715546  429920 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 18:30:09.715555  429920 command_runner.go:130] > # log_size_max = -1
	I1208 18:30:09.715569  429920 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I1208 18:30:09.715575  429920 command_runner.go:130] > # log_to_journald = false
	I1208 18:30:09.715584  429920 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 18:30:09.715596  429920 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 18:30:09.715608  429920 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 18:30:09.715619  429920 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 18:30:09.715631  429920 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 18:30:09.715644  429920 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 18:30:09.715655  429920 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 18:30:09.715662  429920 command_runner.go:130] > # read_only = false
	I1208 18:30:09.715671  429920 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 18:30:09.715685  429920 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 18:30:09.715696  429920 command_runner.go:130] > # live configuration reload.
	I1208 18:30:09.715706  429920 command_runner.go:130] > # log_level = "info"
	I1208 18:30:09.715719  429920 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 18:30:09.715730  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:30:09.715739  429920 command_runner.go:130] > # log_filter = ""
	I1208 18:30:09.715748  429920 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 18:30:09.715760  429920 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 18:30:09.715770  429920 command_runner.go:130] > # separated by comma.
	I1208 18:30:09.715780  429920 command_runner.go:130] > # uid_mappings = ""
	I1208 18:30:09.715790  429920 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 18:30:09.715803  429920 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 18:30:09.715813  429920 command_runner.go:130] > # separated by comma.
	I1208 18:30:09.715823  429920 command_runner.go:130] > # gid_mappings = ""
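For illustration, a minimal sketch of the containerID:hostID:size syntax described above, assuming the same drop-in mechanism (the ID range is hypothetical):

	sudo tee /etc/crio/crio.conf.d/20-userns.conf <<-'EOF'
	[crio.runtime]
	# map container IDs 0..65535 onto host IDs starting at 100000 (values illustrative)
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF
	sudo systemctl restart crio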
	I1208 18:30:09.715835  429920 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 18:30:09.715847  429920 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 18:30:09.715861  429920 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 18:30:09.715872  429920 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 18:30:09.715886  429920 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 18:30:09.715899  429920 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 18:30:09.715912  429920 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 18:30:09.715919  429920 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 18:30:09.715927  429920 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1208 18:30:09.715939  429920 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 18:30:09.715953  429920 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1208 18:30:09.715963  429920 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 18:30:09.715973  429920 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 18:30:09.715991  429920 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 18:30:09.716001  429920 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I1208 18:30:09.716009  429920 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 18:30:09.716015  429920 command_runner.go:130] > # drop_infra_ctr = true
	I1208 18:30:09.716029  429920 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 18:30:09.716045  429920 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I1208 18:30:09.716059  429920 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 18:30:09.716069  429920 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 18:30:09.716078  429920 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 18:30:09.716088  429920 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 18:30:09.716095  429920 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 18:30:09.716105  429920 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 18:30:09.716116  429920 command_runner.go:130] > # pinns_path = ""
	I1208 18:30:09.716129  429920 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 18:30:09.716143  429920 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1208 18:30:09.716156  429920 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1208 18:30:09.716167  429920 command_runner.go:130] > # default_runtime = "runc"
	I1208 18:30:09.716175  429920 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 18:30:09.716187  429920 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1208 18:30:09.716204  429920 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 18:30:09.716216  429920 command_runner.go:130] > # creation as a file is not desired either.
	I1208 18:30:09.716232  429920 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 18:30:09.716243  429920 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 18:30:09.716258  429920 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 18:30:09.716264  429920 command_runner.go:130] > # ]
	I1208 18:30:09.716273  429920 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 18:30:09.716287  429920 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 18:30:09.716301  429920 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1208 18:30:09.716315  429920 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1208 18:30:09.716323  429920 command_runner.go:130] > #
	I1208 18:30:09.716332  429920 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1208 18:30:09.716346  429920 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1208 18:30:09.716353  429920 command_runner.go:130] > #  runtime_type = "oci"
	I1208 18:30:09.716361  429920 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1208 18:30:09.716372  429920 command_runner.go:130] > #  privileged_without_host_devices = false
	I1208 18:30:09.716387  429920 command_runner.go:130] > #  allowed_annotations = []
	I1208 18:30:09.716396  429920 command_runner.go:130] > # Where:
	I1208 18:30:09.716408  429920 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1208 18:30:09.716421  429920 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1208 18:30:09.716432  429920 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 18:30:09.716442  429920 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 18:30:09.716454  429920 command_runner.go:130] > #   in $PATH.
	I1208 18:30:09.716469  429920 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1208 18:30:09.716480  429920 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 18:30:09.716494  429920 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of container
	I1208 18:30:09.716503  429920 command_runner.go:130] > #   state.
	I1208 18:30:09.716519  429920 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 18:30:09.716529  429920 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1208 18:30:09.716544  429920 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 18:30:09.716556  429920 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 18:30:09.716569  429920 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 18:30:09.716583  429920 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 18:30:09.716594  429920 command_runner.go:130] > #   The currently recognized values are:
	I1208 18:30:09.716604  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 18:30:09.716616  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 18:30:09.716630  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 18:30:09.716643  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 18:30:09.716658  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 18:30:09.716672  429920 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 18:30:09.716687  429920 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container.
	I1208 18:30:09.716696  429920 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1208 18:30:09.716707  429920 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 18:30:09.716718  429920 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 18:30:09.716730  429920 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1208 18:30:09.716741  429920 command_runner.go:130] > runtime_type = "oci"
	I1208 18:30:09.716751  429920 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 18:30:09.716758  429920 command_runner.go:130] > runtime_config_path = ""
	I1208 18:30:09.716768  429920 command_runner.go:130] > monitor_path = ""
	I1208 18:30:09.716776  429920 command_runner.go:130] > monitor_cgroup = ""
	I1208 18:30:09.716780  429920 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 18:30:09.716843  429920 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1208 18:30:09.716854  429920 command_runner.go:130] > # running containers
	I1208 18:30:09.716861  429920 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1208 18:30:09.716871  429920 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1208 18:30:09.716884  429920 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1208 18:30:09.716897  429920 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1208 18:30:09.716914  429920 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1208 18:30:09.716927  429920 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1208 18:30:09.716938  429920 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1208 18:30:09.716948  429920 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1208 18:30:09.716955  429920 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1208 18:30:09.716966  429920 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
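As a sketch of the runtime-handler table format documented above, enabling the commented-out crun handler could look like this (the binary path and root directory are assumptions; crun must already be installed on the host):

	sudo tee /etc/crio/crio.conf.d/30-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio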
	I1208 18:30:09.716980  429920 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 18:30:09.716992  429920 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 18:30:09.717005  429920 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 18:30:09.717021  429920 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1208 18:30:09.717033  429920 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1208 18:30:09.717042  429920 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 18:30:09.717060  429920 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 18:30:09.717076  429920 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 18:30:09.717088  429920 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 18:30:09.717102  429920 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 18:30:09.717111  429920 command_runner.go:130] > # Example:
	I1208 18:30:09.717120  429920 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 18:30:09.717129  429920 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 18:30:09.717146  429920 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 18:30:09.717158  429920 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 18:30:09.717167  429920 command_runner.go:130] > # cpuset = "0-1"
	I1208 18:30:09.717177  429920 command_runner.go:130] > # cpushares = 0
	I1208 18:30:09.717187  429920 command_runner.go:130] > # Where:
	I1208 18:30:09.717198  429920 command_runner.go:130] > # The workload name is workload-type.
	I1208 18:30:09.717208  429920 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 18:30:09.717220  429920 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 18:30:09.717234  429920 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 18:30:09.717249  429920 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 18:30:09.717262  429920 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1208 18:30:09.717270  429920 command_runner.go:130] > # 
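Following the example above, opting a pod into the workload would look roughly like this (pod and container names are hypothetical, and in practice the annotations must be present at pod creation time for CRI-O to act on them):

	# key-only activation annotation; the value is ignored
	kubectl annotate pod mypod "io.crio/workload="
	# per-container override, mirroring the annotation example above
	kubectl annotate pod mypod 'io.crio.workload-type/mycontainer={"cpushares": "512"}'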
	I1208 18:30:09.717281  429920 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 18:30:09.717288  429920 command_runner.go:130] > #
	I1208 18:30:09.717294  429920 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 18:30:09.717307  429920 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1208 18:30:09.717322  429920 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1208 18:30:09.717335  429920 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1208 18:30:09.717353  429920 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1208 18:30:09.717363  429920 command_runner.go:130] > [crio.image]
	I1208 18:30:09.717373  429920 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 18:30:09.717380  429920 command_runner.go:130] > # default_transport = "docker://"
	I1208 18:30:09.717386  429920 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 18:30:09.717397  429920 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 18:30:09.717405  429920 command_runner.go:130] > # global_auth_file = ""
	I1208 18:30:09.717417  429920 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 18:30:09.717429  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:30:09.717441  429920 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1208 18:30:09.717454  429920 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 18:30:09.717467  429920 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 18:30:09.717478  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:30:09.717485  429920 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 18:30:09.717490  429920 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 18:30:09.717498  429920 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1208 18:30:09.717507  429920 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1208 18:30:09.717512  429920 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 18:30:09.717522  429920 command_runner.go:130] > # pause_command = "/pause"
	I1208 18:30:09.717530  429920 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 18:30:09.717537  429920 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 18:30:09.717545  429920 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 18:30:09.717554  429920 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 18:30:09.717567  429920 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 18:30:09.717581  429920 command_runner.go:130] > # signature_policy = ""
	I1208 18:30:09.717600  429920 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 18:30:09.717613  429920 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 18:30:09.717622  429920 command_runner.go:130] > # changing them here.
	I1208 18:30:09.717628  429920 command_runner.go:130] > # insecure_registries = [
	I1208 18:30:09.717633  429920 command_runner.go:130] > # ]
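A hedged sketch of the override, should one ignore that advice (the registry host is hypothetical; insecure_registries lives in the [crio.image] table):

	sudo tee /etc/crio/crio.conf.d/40-registries.conf <<-'EOF'
	[crio.image]
	insecure_registries = [
		"registry.local:5000",
	]
	EOF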
	I1208 18:30:09.717640  429920 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 18:30:09.717647  429920 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1208 18:30:09.717651  429920 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 18:30:09.717658  429920 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 18:30:09.717663  429920 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 18:30:09.717671  429920 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 18:30:09.717679  429920 command_runner.go:130] > # CNI plugins.
	I1208 18:30:09.717685  429920 command_runner.go:130] > [crio.network]
	I1208 18:30:09.717692  429920 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 18:30:09.717699  429920 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1208 18:30:09.717704  429920 command_runner.go:130] > # cni_default_network = ""
	I1208 18:30:09.717709  429920 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 18:30:09.717715  429920 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 18:30:09.717721  429920 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 18:30:09.717727  429920 command_runner.go:130] > # plugin_dirs = [
	I1208 18:30:09.717731  429920 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 18:30:09.717737  429920 command_runner.go:130] > # ]
	I1208 18:30:09.717742  429920 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1208 18:30:09.717748  429920 command_runner.go:130] > [crio.metrics]
	I1208 18:30:09.717754  429920 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 18:30:09.717762  429920 command_runner.go:130] > # enable_metrics = false
	I1208 18:30:09.717769  429920 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 18:30:09.717774  429920 command_runner.go:130] > # By default, all metrics are enabled.
	I1208 18:30:09.717787  429920 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1208 18:30:09.717804  429920 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 18:30:09.717812  429920 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 18:30:09.717819  429920 command_runner.go:130] > # metrics_collectors = [
	I1208 18:30:09.717823  429920 command_runner.go:130] > # 	"operations",
	I1208 18:30:09.717830  429920 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1208 18:30:09.717834  429920 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1208 18:30:09.717841  429920 command_runner.go:130] > # 	"operations_errors",
	I1208 18:30:09.717846  429920 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1208 18:30:09.717852  429920 command_runner.go:130] > # 	"image_pulls_by_name",
	I1208 18:30:09.717856  429920 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1208 18:30:09.717863  429920 command_runner.go:130] > # 	"image_pulls_failures",
	I1208 18:30:09.717867  429920 command_runner.go:130] > # 	"image_pulls_successes",
	I1208 18:30:09.717871  429920 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 18:30:09.717877  429920 command_runner.go:130] > # 	"image_layer_reuse",
	I1208 18:30:09.717881  429920 command_runner.go:130] > # 	"containers_oom_total",
	I1208 18:30:09.717887  429920 command_runner.go:130] > # 	"containers_oom",
	I1208 18:30:09.717892  429920 command_runner.go:130] > # 	"processes_defunct",
	I1208 18:30:09.717897  429920 command_runner.go:130] > # 	"operations_total",
	I1208 18:30:09.717904  429920 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 18:30:09.717911  429920 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 18:30:09.717916  429920 command_runner.go:130] > # 	"operations_errors_total",
	I1208 18:30:09.717922  429920 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 18:30:09.717927  429920 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 18:30:09.717934  429920 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 18:30:09.717938  429920 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 18:30:09.717944  429920 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 18:30:09.717949  429920 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 18:30:09.717954  429920 command_runner.go:130] > # ]
	I1208 18:30:09.717959  429920 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 18:30:09.717966  429920 command_runner.go:130] > # metrics_port = 9090
	I1208 18:30:09.717971  429920 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 18:30:09.717978  429920 command_runner.go:130] > # metrics_socket = ""
	I1208 18:30:09.717983  429920 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 18:30:09.717991  429920 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 18:30:09.717997  429920 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 18:30:09.718004  429920 command_runner.go:130] > # certificate on any modification event.
	I1208 18:30:09.718013  429920 command_runner.go:130] > # metrics_cert = ""
	I1208 18:30:09.718021  429920 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 18:30:09.718026  429920 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 18:30:09.718032  429920 command_runner.go:130] > # metrics_key = ""
	I1208 18:30:09.718038  429920 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 18:30:09.718043  429920 command_runner.go:130] > [crio.tracing]
	I1208 18:30:09.718049  429920 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 18:30:09.718055  429920 command_runner.go:130] > # enable_tracing = false
	I1208 18:30:09.718061  429920 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1208 18:30:09.718067  429920 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1208 18:30:09.718073  429920 command_runner.go:130] > # Number of samples to collect per million spans.
	I1208 18:30:09.718081  429920 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1208 18:30:09.718089  429920 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 18:30:09.718095  429920 command_runner.go:130] > [crio.stats]
	I1208 18:30:09.718101  429920 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 18:30:09.718108  429920 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 18:30:09.718115  429920 command_runner.go:130] > # stats_collection_period = 0
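To inspect the effective configuration a CRI-O instance would run with after all of the above (a sketch; recent crio binaries ship a config subcommand that prints the merged settings):

	sudo crio config | less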
	I1208 18:30:09.718196  429920 cni.go:84] Creating CNI manager for ""
	I1208 18:30:09.718213  429920 cni.go:136] 1 nodes found, recommending kindnet
	I1208 18:30:09.718231  429920 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1208 18:30:09.718255  429920 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985452 NodeName:multinode-985452 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 18:30:09.718376  429920 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-985452"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
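	A hedged way to sanity-check a rendered config like the one above without mutating the node, run where the file is staged:
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run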
	
	I1208 18:30:09.718445  429920 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-985452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1208 18:30:09.718518  429920 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1208 18:30:09.726017  429920 command_runner.go:130] > kubeadm
	I1208 18:30:09.726040  429920 command_runner.go:130] > kubectl
	I1208 18:30:09.726047  429920 command_runner.go:130] > kubelet
	I1208 18:30:09.726734  429920 binaries.go:44] Found k8s binaries, skipping transfer
	I1208 18:30:09.726801  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 18:30:09.734596  429920 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1208 18:30:09.749642  429920 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 18:30:09.764967  429920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1208 18:30:09.779798  429920 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1208 18:30:09.782700  429920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
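Unrolled for readability, that hosts-file one-liner does the following (same behavior, no new assumptions):

	{
		grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # drop any stale entry
		echo $'192.168.58.2\tcontrol-plane.minikube.internal'       # append the current IP
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts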
	I1208 18:30:09.791969  429920 certs.go:56] Setting up /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452 for IP: 192.168.58.2
	I1208 18:30:09.792001  429920 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5abf3d3db90d2494e2d623a52fec5b2843f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:09.792172  429920 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key
	I1208 18:30:09.792235  429920 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key
	I1208 18:30:09.792301  429920 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key
	I1208 18:30:09.792318  429920 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt with IP's: []
	I1208 18:30:09.970363  429920 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt ...
	I1208 18:30:09.970399  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt: {Name:mkf3a2ce32566bc82c436c480b62576c9e27109c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:09.970624  429920 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key ...
	I1208 18:30:09.970640  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key: {Name:mk8f92f830502809aa43433195eec9c01ae6960e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:09.970747  429920 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key.cee25041
	I1208 18:30:09.970763  429920 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1208 18:30:10.136346  429920 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt.cee25041 ...
	I1208 18:30:10.136382  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt.cee25041: {Name:mk67f02f700f56f0d7f7e53a6f4240da6ac53bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:10.136571  429920 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key.cee25041 ...
	I1208 18:30:10.136589  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key.cee25041: {Name:mkb7a1d3fa35642e8a9ddfe2479de305a7a8ed6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:10.136728  429920 certs.go:337] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt
	I1208 18:30:10.136803  429920 certs.go:341] copying /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key
	I1208 18:30:10.136866  429920 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.key
	I1208 18:30:10.136882  429920 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.crt with IP's: []
	I1208 18:30:10.421718  429920 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.crt ...
	I1208 18:30:10.421758  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.crt: {Name:mka8e7f3dce138aea731079924c1a48ad3052104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:10.421946  429920 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.key ...
	I1208 18:30:10.421961  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.key: {Name:mk3b6dca67fcf71a44cfe85feaa7762529e137d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:10.422129  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 18:30:10.422167  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 18:30:10.422178  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 18:30:10.422188  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 18:30:10.422204  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 18:30:10.422215  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 18:30:10.422226  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 18:30:10.422238  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 18:30:10.422296  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem (1338 bytes)
	W1208 18:30:10.422370  429920 certs.go:433] ignoring /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628_empty.pem, impossibly tiny 0 bytes
	I1208 18:30:10.422384  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 18:30:10.422406  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem (1082 bytes)
	I1208 18:30:10.422430  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem (1123 bytes)
	I1208 18:30:10.422472  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem (1679 bytes)
	I1208 18:30:10.422515  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:30:10.422542  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> /usr/share/ca-certificates/3436282.pem
	I1208 18:30:10.422556  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:30:10.422567  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem -> /usr/share/ca-certificates/343628.pem
	I1208 18:30:10.423235  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1208 18:30:10.446314  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 18:30:10.468341  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 18:30:10.490259  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 18:30:10.511373  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 18:30:10.532419  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 18:30:10.552933  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 18:30:10.573580  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 18:30:10.593777  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /usr/share/ca-certificates/3436282.pem (1708 bytes)
	I1208 18:30:10.614328  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 18:30:10.634806  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem --> /usr/share/ca-certificates/343628.pem (1338 bytes)
	I1208 18:30:10.654986  429920 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 18:30:10.672302  429920 ssh_runner.go:195] Run: openssl version
	I1208 18:30:10.676983  429920 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1208 18:30:10.677060  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1208 18:30:10.685736  429920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:30:10.688801  429920 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:30:10.688830  429920 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:30:10.688876  429920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:30:10.695028  429920 command_runner.go:130] > b5213941
	I1208 18:30:10.695279  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1208 18:30:10.704243  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/343628.pem && ln -fs /usr/share/ca-certificates/343628.pem /etc/ssl/certs/343628.pem"
	I1208 18:30:10.712568  429920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/343628.pem
	I1208 18:30:10.715980  429920 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 18:17 /usr/share/ca-certificates/343628.pem
	I1208 18:30:10.716013  429920 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  8 18:17 /usr/share/ca-certificates/343628.pem
	I1208 18:30:10.716042  429920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/343628.pem
	I1208 18:30:10.722157  429920 command_runner.go:130] > 51391683
	I1208 18:30:10.722215  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/343628.pem /etc/ssl/certs/51391683.0"
	I1208 18:30:10.730231  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3436282.pem && ln -fs /usr/share/ca-certificates/3436282.pem /etc/ssl/certs/3436282.pem"
	I1208 18:30:10.738296  429920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3436282.pem
	I1208 18:30:10.741296  429920 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 18:17 /usr/share/ca-certificates/3436282.pem
	I1208 18:30:10.741336  429920 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  8 18:17 /usr/share/ca-certificates/3436282.pem
	I1208 18:30:10.741382  429920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3436282.pem
	I1208 18:30:10.747559  429920 command_runner.go:130] > 3ec20f2e
	I1208 18:30:10.747815  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3436282.pem /etc/ssl/certs/3ec20f2e.0"
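The pattern behind these three hash-and-link steps, as a minimal sketch generalizing the exact commands logged above:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")    # prints b5213941 for this CA, per the log
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"   # OpenSSL looks CAs up by this hashed name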
	I1208 18:30:10.755741  429920 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1208 18:30:10.758578  429920 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:30:10.758611  429920 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:30:10.758651  429920 kubeadm.go:404] StartCluster: {Name:multinode-985452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:30:10.758718  429920 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 18:30:10.758752  429920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 18:30:10.791345  429920 cri.go:89] found id: ""
	I1208 18:30:10.791436  429920 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 18:30:10.799275  429920 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1208 18:30:10.799300  429920 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1208 18:30:10.799309  429920 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1208 18:30:10.799384  429920 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 18:30:10.806910  429920 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1208 18:30:10.806959  429920 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 18:30:10.814123  429920 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1208 18:30:10.814160  429920 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1208 18:30:10.814172  429920 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1208 18:30:10.814183  429920 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 18:30:10.814231  429920 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
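
The exit-status-2 result above is how minikube decides this is a fresh init rather than a restart: if any of the four kubeconfig files is missing, ls fails and stale-config cleanup is skipped. A rough standalone equivalent of that check (paths copied from the log, error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	// ls exits non-zero (status 2 here) if any listed file is absent.
	if err := exec.Command("ls", append([]string{"-la"}, files...)...).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("existing kubeconfigs found; stale config cleanup would run")
}
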
	I1208 18:30:10.814268  429920 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 18:30:10.856680  429920 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1208 18:30:10.856714  429920 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1208 18:30:10.856764  429920 kubeadm.go:322] [preflight] Running pre-flight checks
	I1208 18:30:10.856775  429920 command_runner.go:130] > [preflight] Running pre-flight checks
	I1208 18:30:10.893064  429920 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1208 18:30:10.893081  429920 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1208 18:30:10.893194  429920 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1208 18:30:10.893211  429920 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1208 18:30:10.893255  429920 kubeadm.go:322] OS: Linux
	I1208 18:30:10.893266  429920 command_runner.go:130] > OS: Linux
	I1208 18:30:10.893322  429920 kubeadm.go:322] CGROUPS_CPU: enabled
	I1208 18:30:10.893333  429920 command_runner.go:130] > CGROUPS_CPU: enabled
	I1208 18:30:10.893391  429920 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1208 18:30:10.893410  429920 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1208 18:30:10.893480  429920 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1208 18:30:10.893491  429920 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1208 18:30:10.893557  429920 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1208 18:30:10.893568  429920 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1208 18:30:10.893642  429920 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1208 18:30:10.893653  429920 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1208 18:30:10.893737  429920 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1208 18:30:10.893747  429920 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1208 18:30:10.893825  429920 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1208 18:30:10.893848  429920 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1208 18:30:10.893888  429920 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1208 18:30:10.893895  429920 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1208 18:30:10.893952  429920 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1208 18:30:10.893960  429920 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1208 18:30:10.956822  429920 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 18:30:10.956870  429920 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 18:30:10.957009  429920 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 18:30:10.957027  429920 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 18:30:10.957151  429920 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 18:30:10.957171  429920 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 18:30:11.145395  429920 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 18:30:11.145428  429920 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 18:30:11.149505  429920 out.go:204]   - Generating certificates and keys ...
	I1208 18:30:11.149598  429920 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1208 18:30:11.149639  429920 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1208 18:30:11.149740  429920 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1208 18:30:11.149754  429920 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1208 18:30:11.314877  429920 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 18:30:11.314920  429920 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 18:30:11.600997  429920 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1208 18:30:11.601030  429920 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1208 18:30:11.817855  429920 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1208 18:30:11.817890  429920 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1208 18:30:11.900864  429920 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1208 18:30:11.900901  429920 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1208 18:30:11.946551  429920 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1208 18:30:11.946584  429920 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1208 18:30:11.946738  429920 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985452] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1208 18:30:11.946771  429920 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985452] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1208 18:30:12.143436  429920 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1208 18:30:12.143490  429920 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1208 18:30:12.143678  429920 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985452] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1208 18:30:12.143695  429920 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985452] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1208 18:30:12.220748  429920 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 18:30:12.220777  429920 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 18:30:12.366385  429920 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 18:30:12.366429  429920 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 18:30:12.567005  429920 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1208 18:30:12.567034  429920 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1208 18:30:12.567140  429920 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 18:30:12.567151  429920 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 18:30:12.863797  429920 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 18:30:12.863826  429920 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 18:30:13.024477  429920 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 18:30:13.024523  429920 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 18:30:13.194169  429920 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 18:30:13.194200  429920 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 18:30:13.281983  429920 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 18:30:13.282018  429920 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 18:30:13.282476  429920 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 18:30:13.282499  429920 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 18:30:13.284634  429920 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 18:30:13.286826  429920 out.go:204]   - Booting up control plane ...
	I1208 18:30:13.284757  429920 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 18:30:13.286947  429920 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 18:30:13.287002  429920 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 18:30:13.287121  429920 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 18:30:13.287135  429920 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 18:30:13.287211  429920 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 18:30:13.287223  429920 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 18:30:13.295240  429920 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 18:30:13.295264  429920 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 18:30:13.295961  429920 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 18:30:13.295981  429920 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 18:30:13.296044  429920 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1208 18:30:13.296058  429920 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1208 18:30:13.370163  429920 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1208 18:30:13.370187  429920 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1208 18:30:18.372856  429920 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002736 seconds
	I1208 18:30:18.372891  429920 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002736 seconds
	I1208 18:30:18.373018  429920 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 18:30:18.373028  429920 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 18:30:18.385612  429920 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 18:30:18.385646  429920 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 18:30:18.906139  429920 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 18:30:18.906176  429920 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1208 18:30:18.906438  429920 kubeadm.go:322] [mark-control-plane] Marking the node multinode-985452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 18:30:18.906483  429920 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 18:30:19.416913  429920 kubeadm.go:322] [bootstrap-token] Using token: 3418jd.hujn632ksl5eez1r
	I1208 18:30:19.418497  429920 out.go:204]   - Configuring RBAC rules ...
	I1208 18:30:19.416961  429920 command_runner.go:130] > [bootstrap-token] Using token: 3418jd.hujn632ksl5eez1r
	I1208 18:30:19.418675  429920 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 18:30:19.418698  429920 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 18:30:19.422183  429920 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 18:30:19.422203  429920 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 18:30:19.429520  429920 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 18:30:19.429548  429920 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 18:30:19.432137  429920 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 18:30:19.432164  429920 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 18:30:19.434721  429920 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 18:30:19.434743  429920 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 18:30:19.437377  429920 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 18:30:19.437397  429920 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 18:30:19.446357  429920 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 18:30:19.446376  429920 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 18:30:19.684694  429920 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1208 18:30:19.684726  429920 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1208 18:30:19.827216  429920 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1208 18:30:19.827251  429920 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1208 18:30:19.829183  429920 kubeadm.go:322] 
	I1208 18:30:19.829295  429920 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1208 18:30:19.829316  429920 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1208 18:30:19.829356  429920 kubeadm.go:322] 
	I1208 18:30:19.829468  429920 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1208 18:30:19.829488  429920 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1208 18:30:19.829494  429920 kubeadm.go:322] 
	I1208 18:30:19.829524  429920 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1208 18:30:19.829533  429920 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1208 18:30:19.829600  429920 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 18:30:19.829619  429920 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 18:30:19.829711  429920 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 18:30:19.829720  429920 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 18:30:19.829725  429920 kubeadm.go:322] 
	I1208 18:30:19.829787  429920 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1208 18:30:19.829797  429920 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1208 18:30:19.829802  429920 kubeadm.go:322] 
	I1208 18:30:19.829886  429920 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 18:30:19.829924  429920 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 18:30:19.829936  429920 kubeadm.go:322] 
	I1208 18:30:19.830015  429920 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1208 18:30:19.830025  429920 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1208 18:30:19.830137  429920 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 18:30:19.830162  429920 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 18:30:19.830269  429920 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 18:30:19.830282  429920 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 18:30:19.830287  429920 kubeadm.go:322] 
	I1208 18:30:19.830393  429920 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 18:30:19.830411  429920 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1208 18:30:19.830532  429920 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1208 18:30:19.830552  429920 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1208 18:30:19.830569  429920 kubeadm.go:322] 
	I1208 18:30:19.830677  429920 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3418jd.hujn632ksl5eez1r \
	I1208 18:30:19.830701  429920 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 3418jd.hujn632ksl5eez1r \
	I1208 18:30:19.830866  429920 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 \
	I1208 18:30:19.830883  429920 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 \
	I1208 18:30:19.830908  429920 kubeadm.go:322] 	--control-plane 
	I1208 18:30:19.830923  429920 command_runner.go:130] > 	--control-plane 
	I1208 18:30:19.830929  429920 kubeadm.go:322] 
	I1208 18:30:19.831053  429920 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1208 18:30:19.831093  429920 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1208 18:30:19.831135  429920 kubeadm.go:322] 
	I1208 18:30:19.831254  429920 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3418jd.hujn632ksl5eez1r \
	I1208 18:30:19.831266  429920 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3418jd.hujn632ksl5eez1r \
	I1208 18:30:19.831415  429920 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 
	I1208 18:30:19.831427  429920 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 
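
The --discovery-token-ca-cert-hash printed in both join commands is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate; joining nodes use it to pin the control plane's identity. A small sketch that recomputes the hash from a ca.crt PEM (the path is an assumption; any CA certificate file works):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
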
	I1208 18:30:19.834012  429920 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1208 18:30:19.834037  429920 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1208 18:30:19.834166  429920 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 18:30:19.834180  429920 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 18:30:19.834200  429920 cni.go:84] Creating CNI manager for ""
	I1208 18:30:19.834212  429920 cni.go:136] 1 nodes found, recommending kindnet
	I1208 18:30:19.836614  429920 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1208 18:30:19.838163  429920 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 18:30:19.842617  429920 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1208 18:30:19.842646  429920 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1208 18:30:19.842657  429920 command_runner.go:130] > Device: 34h/52d	Inode: 1303407     Links: 1
	I1208 18:30:19.842666  429920 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 18:30:19.842676  429920 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1208 18:30:19.842690  429920 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1208 18:30:19.842715  429920 command_runner.go:130] > Change: 2023-12-08 18:10:37.800699976 +0000
	I1208 18:30:19.842728  429920 command_runner.go:130] >  Birth: 2023-12-08 18:10:37.776697531 +0000
	I1208 18:30:19.842818  429920 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1208 18:30:19.842834  429920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1208 18:30:19.861272  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 18:30:20.542147  429920 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1208 18:30:20.548285  429920 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1208 18:30:20.555007  429920 command_runner.go:130] > serviceaccount/kindnet created
	I1208 18:30:20.563534  429920 command_runner.go:130] > daemonset.apps/kindnet created
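
The kindnet objects above come from minikube scp-ing the rendered manifest to /var/tmp/minikube/cni.yaml and applying it with the version-pinned kubectl it keeps under /var/lib/minikube/binaries. An approximate reconstruction of that apply step as a standalone program (command and paths taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet created"
	if err != nil {
		panic(err)
	}
}
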
	I1208 18:30:20.567725  429920 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 18:30:20.567810  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:20.567879  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b minikube.k8s.io/name=multinode-985452 minikube.k8s.io/updated_at=2023_12_08T18_30_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:20.639169  429920 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1208 18:30:20.642969  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:20.648549  429920 command_runner.go:130] > node/multinode-985452 labeled
	I1208 18:30:20.652028  429920 command_runner.go:130] > -16
	I1208 18:30:20.652067  429920 ops.go:34] apiserver oom_adj: -16
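
The -16 read back above is the kube-apiserver's legacy OOM adjustment; negative values make the kernel's OOM killer less likely to target the process. A sketch of the same check done directly, mirroring the cat /proc/$(pgrep kube-apiserver)/oom_adj pipeline in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	// oom_adj is the legacy interface the log reads; newer code would use oom_score_adj.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
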
	I1208 18:30:20.729350  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:20.729452  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:20.794569  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:21.295375  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:21.357024  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:21.795501  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:21.856091  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:22.295666  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:22.357245  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:22.795741  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:22.856101  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:23.294914  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:23.357991  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:23.795658  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:23.859741  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:24.295369  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:24.359923  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:24.795595  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:24.858624  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:25.295146  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:25.359434  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:25.795755  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:25.863661  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:26.295177  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:26.354380  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:26.795718  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:26.860042  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:27.295724  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:27.359734  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:27.795817  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:27.858522  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:28.295088  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:28.358471  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:28.795105  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:28.856757  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:29.295720  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:29.358244  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:29.795264  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:29.859235  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:30.295643  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:30.359514  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:30.795363  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:30.860670  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:31.295261  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:31.367369  429920 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1208 18:30:31.794854  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:30:31.859911  429920 command_runner.go:130] > NAME      SECRETS   AGE
	I1208 18:30:31.859933  429920 command_runner.go:130] > default   0         0s
	I1208 18:30:31.862585  429920 kubeadm.go:1088] duration metric: took 11.294845408s to wait for elevateKubeSystemPrivileges.
	I1208 18:30:31.862622  429920 kubeadm.go:406] StartCluster complete in 21.103973763s
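
The run of NotFound errors above is expected: the controller manager creates the "default" ServiceAccount asynchronously, so minikube polls for it before granting it elevated privileges. A simplified version of that poll loop against the same pinned kubectl (the 500ms interval matches the cadence in the log; the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
		).Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s retry cadence above
	}
	panic("timed out waiting for default ServiceAccount")
}
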
	I1208 18:30:31.862686  429920 settings.go:142] acquiring lock: {Name:mkb1d8fbfd540ec0ff42a4ec77782a6addbbad21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:31.862770  429920 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:30:31.863730  429920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/kubeconfig: {Name:mk170d1df5bab3a276f3bc17a718825dd5b16d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:30:31.863985  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 18:30:31.864132  429920 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1208 18:30:31.864210  429920 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:30:31.864215  429920 addons.go:69] Setting storage-provisioner=true in profile "multinode-985452"
	I1208 18:30:31.864239  429920 addons.go:69] Setting default-storageclass=true in profile "multinode-985452"
	I1208 18:30:31.864297  429920 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985452"
	I1208 18:30:31.864366  429920 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:30:31.864244  429920 addons.go:231] Setting addon storage-provisioner=true in "multinode-985452"
	I1208 18:30:31.864468  429920 host.go:66] Checking if "multinode-985452" exists ...
	I1208 18:30:31.864723  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:30:31.864668  429920 kapi.go:59] client config for multinode-985452: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:30:31.864909  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:30:31.865430  429920 cert_rotation.go:137] Starting client certificate rotation controller
	I1208 18:30:31.865739  429920 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1208 18:30:31.865755  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:31.865763  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:31.865772  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:31.876595  429920 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1208 18:30:31.876628  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:31.876639  429920 round_trippers.go:580]     Audit-Id: 8cd85b13-5cbf-4afd-abde-9e9a53a4fc2c
	I1208 18:30:31.876647  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:31.876655  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:31.876663  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:31.876672  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:31.876682  429920 round_trippers.go:580]     Content-Length: 291
	I1208 18:30:31.876695  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:31 GMT
	I1208 18:30:31.876747  429920 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9cad5cc5-6d14-4fb9-8d70-bbd3db2a56bf","resourceVersion":"270","creationTimestamp":"2023-12-08T18:30:19Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1208 18:30:31.877269  429920 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9cad5cc5-6d14-4fb9-8d70-bbd3db2a56bf","resourceVersion":"270","creationTimestamp":"2023-12-08T18:30:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1208 18:30:31.877348  429920 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1208 18:30:31.877358  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:31.877368  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:31.877377  429920 round_trippers.go:473]     Content-Type: application/json
	I1208 18:30:31.877386  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:31.884578  429920 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1208 18:30:31.884604  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:31.884616  429920 round_trippers.go:580]     Content-Length: 291
	I1208 18:30:31.884623  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:31 GMT
	I1208 18:30:31.884630  429920 round_trippers.go:580]     Audit-Id: 578dfb14-c6a5-47d5-ae34-4bccc3f5cdd0
	I1208 18:30:31.884637  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:31.884646  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:31.884659  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:31.884672  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:31.884728  429920 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9cad5cc5-6d14-4fb9-8d70-bbd3db2a56bf","resourceVersion":"336","creationTimestamp":"2023-12-08T18:30:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1208 18:30:31.884936  429920 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1208 18:30:31.884954  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:31.884965  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:31.884975  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:31.885645  429920 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:30:31.885928  429920 kapi.go:59] client config for multinode-985452: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:30:31.886241  429920 addons.go:231] Setting addon default-storageclass=true in "multinode-985452"
	I1208 18:30:31.886307  429920 host.go:66] Checking if "multinode-985452" exists ...
	I1208 18:30:31.886839  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:30:31.886863  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:31.886872  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:31.886880  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:31.886888  429920 round_trippers.go:580]     Content-Length: 291
	I1208 18:30:31.886897  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:31 GMT
	I1208 18:30:31.886907  429920 round_trippers.go:580]     Audit-Id: d2a4e819-d390-48df-b8c6-f6875ea798ff
	I1208 18:30:31.886917  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:31.886930  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:31.887018  429920 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:30:31.887123  429920 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9cad5cc5-6d14-4fb9-8d70-bbd3db2a56bf","resourceVersion":"336","creationTimestamp":"2023-12-08T18:30:19Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1208 18:30:31.887246  429920 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-985452" context rescaled to 1 replicas
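
The GET/PUT pair against .../deployments/coredns/scale above uses the autoscaling/v1 Scale subresource, which lets minikube drop CoreDNS from 2 replicas to 1 without rewriting the rest of the Deployment. The same exchange expressed with client-go (the kubeconfig path is an assumption):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	deployments := cs.AppsV1().Deployments("kube-system")
	// GET .../deployments/coredns/scale
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1 // same change as the PUT body in the log
	// PUT .../deployments/coredns/scale
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
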
	I1208 18:30:31.887281  429920 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 18:30:31.889116  429920 out.go:177] * Verifying Kubernetes components...
	I1208 18:30:31.890519  429920 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 18:30:31.892000  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:30:31.892092  429920 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:30:31.892117  429920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 18:30:31.892183  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:31.904240  429920 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 18:30:31.904265  429920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 18:30:31.904312  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:30:31.918011  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:30:31.930343  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
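
The docker container inspect -f runs above resolve which host port Docker mapped to the container's sshd on 22/tcp (33149 here); the SSH clients then dial 127.0.0.1 on that port with the profile's id_rsa key. A minimal reconstruction of the port lookup, reusing the exact Go template from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log uses to pull the host port mapped to 22/tcp.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "multinode-985452").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Printf("ssh -i <profile id_rsa> -p %s docker@127.0.0.1\n", port)
}
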
	I1208 18:30:31.939085  429920 command_runner.go:130] > apiVersion: v1
	I1208 18:30:31.939107  429920 command_runner.go:130] > data:
	I1208 18:30:31.939114  429920 command_runner.go:130] >   Corefile: |
	I1208 18:30:31.939119  429920 command_runner.go:130] >     .:53 {
	I1208 18:30:31.939125  429920 command_runner.go:130] >         errors
	I1208 18:30:31.939132  429920 command_runner.go:130] >         health {
	I1208 18:30:31.939139  429920 command_runner.go:130] >            lameduck 5s
	I1208 18:30:31.939146  429920 command_runner.go:130] >         }
	I1208 18:30:31.939155  429920 command_runner.go:130] >         ready
	I1208 18:30:31.939176  429920 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1208 18:30:31.939185  429920 command_runner.go:130] >            pods insecure
	I1208 18:30:31.939198  429920 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1208 18:30:31.939210  429920 command_runner.go:130] >            ttl 30
	I1208 18:30:31.939220  429920 command_runner.go:130] >         }
	I1208 18:30:31.939230  429920 command_runner.go:130] >         prometheus :9153
	I1208 18:30:31.939241  429920 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1208 18:30:31.939253  429920 command_runner.go:130] >            max_concurrent 1000
	I1208 18:30:31.939263  429920 command_runner.go:130] >         }
	I1208 18:30:31.939272  429920 command_runner.go:130] >         cache 30
	I1208 18:30:31.939281  429920 command_runner.go:130] >         loop
	I1208 18:30:31.939289  429920 command_runner.go:130] >         reload
	I1208 18:30:31.939299  429920 command_runner.go:130] >         loadbalance
	I1208 18:30:31.939307  429920 command_runner.go:130] >     }
	I1208 18:30:31.939315  429920 command_runner.go:130] > kind: ConfigMap
	I1208 18:30:31.939325  429920 command_runner.go:130] > metadata:
	I1208 18:30:31.939339  429920 command_runner.go:130] >   creationTimestamp: "2023-12-08T18:30:19Z"
	I1208 18:30:31.939360  429920 command_runner.go:130] >   name: coredns
	I1208 18:30:31.939373  429920 command_runner.go:130] >   namespace: kube-system
	I1208 18:30:31.939380  429920 command_runner.go:130] >   resourceVersion: "266"
	I1208 18:30:31.939388  429920 command_runner.go:130] >   uid: c568c539-5f07-4892-be78-7ef362e03d8a
	I1208 18:30:31.942222  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
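
The sed pipeline above splices a hosts block ahead of the forward plugin (and a log directive ahead of errors), so host.minikube.internal resolves to the gateway IP from inside the cluster. Reconstructed from the sed expression, the relevant portion of the rewritten Corefile would read:

.:53 {
        log
        errors
        ...
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
}

The "configmap/coredns replaced" line further down confirms the rewrite landed.
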
	I1208 18:30:31.942624  429920 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:30:31.943087  429920 kapi.go:59] client config for multinode-985452: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:30:31.943479  429920 node_ready.go:35] waiting up to 6m0s for node "multinode-985452" to be "Ready" ...
	I1208 18:30:31.943596  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:31.943607  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:31.943617  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:31.943626  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:31.945756  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:31.945777  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:31.945786  429920 round_trippers.go:580]     Audit-Id: 90ef49dc-195a-46f2-b0cc-fcc81f10640f
	I1208 18:30:31.945794  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:31.945802  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:31.945815  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:31.945822  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:31.945829  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:31 GMT
	I1208 18:30:31.945964  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"335","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6146 chars]
	I1208 18:30:31.946782  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:31.946806  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:31.946818  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:31.946828  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:31.949001  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:31.949027  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:31.949038  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:31.949047  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:31.949060  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:31 GMT
	I1208 18:30:31.949068  429920 round_trippers.go:580]     Audit-Id: 3ecad3cc-9bd5-4911-aa11-c8225b814f7f
	I1208 18:30:31.949080  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:31.949092  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:31.949314  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"335","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6146 chars]
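
Each GET on /api/v1/nodes/multinode-985452 here is one iteration of the node readiness wait: minikube fetches the Node object and inspects its Ready condition until the status flips to True. A rough client-go version of a single check (kubeconfig path assumed, as above):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the Node and reports whether its Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "multinode-985452")
	if err != nil {
		panic(err)
	}
	fmt.Println("node ready:", ready)
}
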
	I1208 18:30:32.038833  429920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 18:30:32.039357  429920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 18:30:32.450552  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:32.450575  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:32.450584  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:32.450590  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:32.453200  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:32.453232  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:32.453243  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:32.453252  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:32.453260  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:32 GMT
	I1208 18:30:32.453269  429920 round_trippers.go:580]     Audit-Id: 55ef1048-db29-4e18-b9d8-9ba9740b8a6e
	I1208 18:30:32.453277  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:32.453290  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:32.453447  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"335","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6146 chars]
	I1208 18:30:32.529246  429920 command_runner.go:130] > configmap/coredns replaced
	I1208 18:30:32.535718  429920 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1208 18:30:32.891533  429920 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1208 18:30:32.891570  429920 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1208 18:30:32.891581  429920 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1208 18:30:32.891591  429920 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1208 18:30:32.891599  429920 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1208 18:30:32.891606  429920 command_runner.go:130] > pod/storage-provisioner created
	I1208 18:30:32.891675  429920 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1208 18:30:32.891855  429920 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1208 18:30:32.891875  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:32.891887  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:32.891896  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:32.919267  429920 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1208 18:30:32.919294  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:32.919305  429920 round_trippers.go:580]     Audit-Id: ba22d7cb-35b6-47d1-8e83-f8e398b4d1c8
	I1208 18:30:32.919315  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:32.919324  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:32.919332  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:32.919345  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:32.919357  429920 round_trippers.go:580]     Content-Length: 1273
	I1208 18:30:32.919368  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:32 GMT
	I1208 18:30:32.919458  429920 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"5c41c10e-8e19-4523-9cab-2eff34233ee1","resourceVersion":"356","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1208 18:30:32.919860  429920 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5c41c10e-8e19-4523-9cab-2eff34233ee1","resourceVersion":"356","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1208 18:30:32.919918  429920 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1208 18:30:32.919929  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:32.919936  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:32.919942  429920 round_trippers.go:473]     Content-Type: application/json
	I1208 18:30:32.919952  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:32.922489  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:32.922511  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:32.922520  429920 round_trippers.go:580]     Audit-Id: 6506c2b8-a1db-4ccf-be77-4333f955c925
	I1208 18:30:32.922526  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:32.922532  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:32.922537  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:32.922542  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:32.922547  429920 round_trippers.go:580]     Content-Length: 1220
	I1208 18:30:32.922552  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:32 GMT
	I1208 18:30:32.922574  429920 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5c41c10e-8e19-4523-9cab-2eff34233ee1","resourceVersion":"356","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1208 18:30:32.924563  429920 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1208 18:30:32.925959  429920 addons.go:502] enable addons completed in 1.061828997s: enabled=[storage-provisioner default-storageclass]
	I1208 18:30:32.950792  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:32.950813  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:32.950821  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:32.950828  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:32.953270  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:32.953295  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:32.953306  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:32.953314  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:32.953320  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:32.953325  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:32.953332  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:32 GMT
	I1208 18:30:32.953339  429920 round_trippers.go:580]     Audit-Id: 62d32da1-c500-43d1-b450-64940a810dba
	I1208 18:30:32.953448  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:33.450046  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:33.450068  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:33.450077  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:33.450083  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:33.452356  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:33.452377  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:33.452385  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:33.452391  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:33.452397  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:33.452403  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:33.452408  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:33 GMT
	I1208 18:30:33.452413  429920 round_trippers.go:580]     Audit-Id: 5cdb17cd-f0a2-49ee-b870-be1c1c36b589
	I1208 18:30:33.452540  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:33.950026  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:33.950051  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:33.950059  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:33.950066  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:33.952468  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:33.952496  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:33.952503  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:33.952509  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:33.952514  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:33.952519  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:33 GMT
	I1208 18:30:33.952524  429920 round_trippers.go:580]     Audit-Id: c4310062-5e65-463f-9801-b7faaf40c9e3
	I1208 18:30:33.952529  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:33.952648  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:33.952984  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:34.450132  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:34.450153  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:34.450162  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:34.450168  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:34.452479  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:34.452504  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:34.452512  429920 round_trippers.go:580]     Audit-Id: 86df491a-ec67-4381-b290-2803ae7a4862
	I1208 18:30:34.452517  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:34.452523  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:34.452528  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:34.452533  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:34.452539  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:34 GMT
	I1208 18:30:34.452651  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:34.950301  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:34.950326  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:34.950334  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:34.950341  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:34.952647  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:34.952677  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:34.952688  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:34.952696  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:34.952705  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:34 GMT
	I1208 18:30:34.952714  429920 round_trippers.go:580]     Audit-Id: 64bc71d9-6c55-4fb9-9fcf-e334e961632d
	I1208 18:30:34.952721  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:34.952728  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:34.952848  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:35.450072  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:35.450096  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:35.450104  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:35.450110  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:35.452265  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:35.452285  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:35.452292  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:35.452298  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:35.452304  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:35.452312  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:35 GMT
	I1208 18:30:35.452320  429920 round_trippers.go:580]     Audit-Id: 8de2b162-b6a4-416a-b9d0-f7ba428e5fa1
	I1208 18:30:35.452328  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:35.452483  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:35.950007  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:35.950032  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:35.950041  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:35.950052  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:35.952331  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:35.952355  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:35.952365  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:35.952372  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:35 GMT
	I1208 18:30:35.952378  429920 round_trippers.go:580]     Audit-Id: fd3c54a7-0c0d-4f87-b1ed-304515103e61
	I1208 18:30:35.952386  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:35.952393  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:35.952402  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:35.952551  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:36.449927  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:36.449950  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:36.449959  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:36.449970  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:36.452025  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:36.452054  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:36.452066  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:36.452075  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:36 GMT
	I1208 18:30:36.452084  429920 round_trippers.go:580]     Audit-Id: e471688e-378c-4210-84bf-d95c0d4be496
	I1208 18:30:36.452092  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:36.452099  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:36.452137  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:36.452264  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:36.452613  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:36.950929  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:36.950951  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:36.950960  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:36.950966  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:36.953275  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:36.953295  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:36.953302  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:36 GMT
	I1208 18:30:36.953308  429920 round_trippers.go:580]     Audit-Id: 693d0554-038d-471c-a94d-9a01825d04aa
	I1208 18:30:36.953313  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:36.953318  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:36.953326  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:36.953339  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:36.953495  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:37.450070  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:37.450094  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:37.450103  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:37.450110  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:37.452458  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:37.452484  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:37.452495  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:37.452504  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:37.452513  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:37.452522  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:37 GMT
	I1208 18:30:37.452534  429920 round_trippers.go:580]     Audit-Id: bdfe745b-45de-49bc-a34f-7805a846de28
	I1208 18:30:37.452542  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:37.452687  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:37.950739  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:37.950765  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:37.950774  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:37.950782  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:37.952802  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:37.952824  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:37.952831  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:37.952836  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:37.952841  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:37.952847  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:37 GMT
	I1208 18:30:37.952852  429920 round_trippers.go:580]     Audit-Id: f0d14150-a9eb-4d1b-b1b1-bf063a363f1e
	I1208 18:30:37.952872  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:37.953076  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:38.450700  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:38.450724  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:38.450733  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:38.450738  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:38.453025  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:38.453047  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:38.453053  429920 round_trippers.go:580]     Audit-Id: 90a026c9-f9cb-4e9e-a3b5-1fbe64ce8a07
	I1208 18:30:38.453059  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:38.453064  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:38.453070  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:38.453075  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:38.453080  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:38 GMT
	I1208 18:30:38.453233  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:38.453627  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:38.949920  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:38.949944  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:38.949954  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:38.949963  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:38.952231  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:38.952255  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:38.952266  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:38 GMT
	I1208 18:30:38.952276  429920 round_trippers.go:580]     Audit-Id: 69113966-a559-439d-b4c0-fbc1a31b275a
	I1208 18:30:38.952284  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:38.952291  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:38.952300  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:38.952309  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:38.952517  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:39.449936  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:39.449966  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:39.449975  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:39.449981  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:39.452343  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:39.452364  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:39.452371  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:39 GMT
	I1208 18:30:39.452377  429920 round_trippers.go:580]     Audit-Id: 88293bc2-bf54-45a6-bb5c-49cfcca41759
	I1208 18:30:39.452382  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:39.452387  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:39.452392  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:39.452397  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:39.452505  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:39.950006  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:39.950032  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:39.950041  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:39.950047  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:39.952494  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:39.952519  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:39.952531  429920 round_trippers.go:580]     Audit-Id: 92029326-95a7-494d-9246-0713862982e3
	I1208 18:30:39.952540  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:39.952547  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:39.952552  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:39.952560  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:39.952565  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:39 GMT
	I1208 18:30:39.952671  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:40.449986  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:40.450017  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:40.450025  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:40.450032  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:40.452446  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:40.452467  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:40.452474  429920 round_trippers.go:580]     Audit-Id: bac457a5-29a7-4c32-9559-a8a72229ed27
	I1208 18:30:40.452480  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:40.452485  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:40.452490  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:40.452495  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:40.452502  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:40 GMT
	I1208 18:30:40.452668  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:40.950138  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:40.950166  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:40.950174  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:40.950180  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:40.952389  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:40.952414  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:40.952429  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:40.952439  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:40.952447  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:40.952456  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:40 GMT
	I1208 18:30:40.952464  429920 round_trippers.go:580]     Audit-Id: 906ef568-2791-46f2-a30c-c3c32f453010
	I1208 18:30:40.952469  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:40.952568  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:40.952914  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:41.450160  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:41.450187  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:41.450195  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:41.450201  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:41.452700  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:41.452725  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:41.452735  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:41.452741  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:41.452746  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:41.452752  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:41.452759  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:41 GMT
	I1208 18:30:41.452767  429920 round_trippers.go:580]     Audit-Id: f54a3b27-d550-41f5-9f97-44f46a2f2f48
	I1208 18:30:41.452874  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:41.950712  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:41.950761  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:41.950773  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:41.950781  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:41.953207  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:41.953228  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:41.953235  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:41 GMT
	I1208 18:30:41.953241  429920 round_trippers.go:580]     Audit-Id: 09257761-d9f8-4ecd-bb8e-5d6718e1e368
	I1208 18:30:41.953246  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:41.953251  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:41.953259  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:41.953267  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:41.953446  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:42.450043  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:42.450079  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:42.450091  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:42.450101  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:42.452251  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:42.452271  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:42.452277  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:42.452283  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:42.452290  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:42.452298  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:42.452306  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:42 GMT
	I1208 18:30:42.452317  429920 round_trippers.go:580]     Audit-Id: ba019ee4-8498-43a7-a6a2-10d513079826
	I1208 18:30:42.452422  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:42.950244  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:42.950268  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:42.950276  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:42.950282  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:42.952591  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:42.952616  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:42.952625  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:42.952633  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:42 GMT
	I1208 18:30:42.952640  429920 round_trippers.go:580]     Audit-Id: 52a27ff9-4158-4c35-903e-30baaaaf3219
	I1208 18:30:42.952648  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:42.952655  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:42.952663  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:42.952834  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:42.953285  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:43.450583  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:43.450604  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:43.450616  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:43.450622  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:43.452886  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:43.452908  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:43.452917  429920 round_trippers.go:580]     Audit-Id: ab0ff4cc-6d98-4230-b80b-73ab8f587fb7
	I1208 18:30:43.452924  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:43.452931  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:43.452938  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:43.452946  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:43.452954  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:43 GMT
	I1208 18:30:43.453081  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:43.950814  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:43.950839  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:43.950848  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:43.950854  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:43.953141  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:43.953171  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:43.953182  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:43.953192  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:43.953201  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:43.953209  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:43 GMT
	I1208 18:30:43.953215  429920 round_trippers.go:580]     Audit-Id: 643bf2b8-14e4-4b23-a162-615c77871de3
	I1208 18:30:43.953223  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:43.953357  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:44.450582  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:44.450611  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:44.450620  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:44.450629  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:44.452810  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:44.452837  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:44.452851  429920 round_trippers.go:580]     Audit-Id: 07c73d5e-dcb4-4326-9d25-edc8707a13ec
	I1208 18:30:44.452859  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:44.452867  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:44.452875  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:44.452882  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:44.452887  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:44 GMT
	I1208 18:30:44.453043  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:44.950503  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:44.950534  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:44.950547  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:44.950557  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:44.952832  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:44.952853  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:44.952866  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:44.952873  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:44 GMT
	I1208 18:30:44.952880  429920 round_trippers.go:580]     Audit-Id: d7aeab17-d577-4eb4-8ad6-1cc2947cc57a
	I1208 18:30:44.952887  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:44.952895  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:44.952913  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:44.953086  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:44.953430  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:45.450801  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:45.450824  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:45.450835  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:45.450844  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:45.453018  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:45.453047  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:45.453058  429920 round_trippers.go:580]     Audit-Id: 22537d63-8990-4175-aa7d-4032099f36c1
	I1208 18:30:45.453066  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:45.453075  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:45.453089  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:45.453098  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:45.453106  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:45 GMT
	I1208 18:30:45.453218  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:45.950945  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:45.950971  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:45.950979  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:45.950985  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:45.953039  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:45.953065  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:45.953076  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:45.953085  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:45.953094  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:45 GMT
	I1208 18:30:45.953103  429920 round_trippers.go:580]     Audit-Id: 0a82b6cb-3754-434a-9e39-94da32628fd0
	I1208 18:30:45.953115  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:45.953122  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:45.953284  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:46.449964  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:46.449990  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:46.449998  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:46.450004  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:46.452377  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:46.452407  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:46.452417  429920 round_trippers.go:580]     Audit-Id: 42465b70-ba40-4dff-8160-3f0a287a1833
	I1208 18:30:46.452423  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:46.452429  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:46.452435  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:46.452442  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:46.452454  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:46 GMT
	I1208 18:30:46.452583  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:46.950079  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:46.950106  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:46.950120  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:46.950130  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:46.952466  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:46.952492  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:46.952502  429920 round_trippers.go:580]     Audit-Id: a4559d20-19fe-4f35-87ee-cb305d41c1a1
	I1208 18:30:46.952512  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:46.952523  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:46.952535  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:46.952543  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:46.952555  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:46 GMT
	I1208 18:30:46.952697  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:47.450130  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:47.450180  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:47.450189  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:47.450195  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:47.452378  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:47.452396  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:47.452417  429920 round_trippers.go:580]     Audit-Id: 301a693e-46fc-49f5-aa2b-0bd0643cc9fd
	I1208 18:30:47.452423  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:47.452428  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:47.452433  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:47.452438  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:47.452443  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:47 GMT
	I1208 18:30:47.452600  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:47.452917  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:47.950555  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:47.950574  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:47.950582  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:47.950588  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:47.952165  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:30:47.952185  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:47.952192  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:47.952198  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:47.952203  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:47 GMT
	I1208 18:30:47.952208  429920 round_trippers.go:580]     Audit-Id: e9f7d650-1dea-40f6-a743-f6e21ac69117
	I1208 18:30:47.952213  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:47.952218  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:47.952365  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:48.450019  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:48.450058  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:48.450067  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:48.450074  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:48.452289  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:48.452317  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:48.452327  429920 round_trippers.go:580]     Audit-Id: 4eb2bce1-d4f0-4967-9510-9b8727d0c601
	I1208 18:30:48.452336  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:48.452345  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:48.452353  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:48.452361  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:48.452368  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:48 GMT
	I1208 18:30:48.452476  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:48.949949  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:48.949972  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:48.949981  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:48.949987  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:48.952162  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:48.952185  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:48.952195  429920 round_trippers.go:580]     Audit-Id: 863ba234-2487-4e07-99d2-4bff1335ead7
	I1208 18:30:48.952203  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:48.952209  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:48.952217  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:48.952224  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:48.952238  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:48 GMT
	I1208 18:30:48.952381  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:49.449980  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:49.450011  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:49.450021  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:49.450027  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:49.452265  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:49.452290  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:49.452299  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:49.452307  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:49 GMT
	I1208 18:30:49.452314  429920 round_trippers.go:580]     Audit-Id: fca1fec2-5038-442e-bd5b-a87bd68e5ff4
	I1208 18:30:49.452321  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:49.452336  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:49.452353  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:49.452445  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:49.950112  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:49.950141  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:49.950154  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:49.950164  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:49.952481  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:49.952500  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:49.952507  429920 round_trippers.go:580]     Audit-Id: 833571dd-244d-4085-8b5b-125e725629c9
	I1208 18:30:49.952513  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:49.952518  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:49.952522  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:49.952528  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:49.952533  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:49 GMT
	I1208 18:30:49.952697  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:49.953017  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:50.450330  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:50.450354  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:50.450362  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:50.450368  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:50.452700  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:50.452724  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:50.452733  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:50 GMT
	I1208 18:30:50.452740  429920 round_trippers.go:580]     Audit-Id: 5db4fbaf-f17c-4e94-b15c-26ad8fafaf8b
	I1208 18:30:50.452747  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:50.452755  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:50.452763  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:50.452772  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:50.452892  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:50.950649  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:50.950677  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:50.950685  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:50.950692  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:50.953125  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:50.953149  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:50.953158  429920 round_trippers.go:580]     Audit-Id: 3ddd7d0b-9f4c-4712-b5a8-55aa27d9ff3d
	I1208 18:30:50.953165  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:50.953173  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:50.953180  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:50.953189  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:50.953199  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:50 GMT
	I1208 18:30:50.953338  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:51.449907  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:51.449932  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:51.449940  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:51.449946  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:51.452396  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:51.452423  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:51.452441  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:51.452450  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:51 GMT
	I1208 18:30:51.452479  429920 round_trippers.go:580]     Audit-Id: 565483e7-4971-44f5-8ae3-81330c7a65d0
	I1208 18:30:51.452492  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:51.452499  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:51.452507  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:51.452638  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:51.950564  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:51.950584  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:51.950592  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:51.950599  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:51.952939  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:51.952965  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:51.952976  429920 round_trippers.go:580]     Audit-Id: 6d7f5ba1-cb77-49a8-89ea-193f0bd90624
	I1208 18:30:51.952985  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:51.952993  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:51.953001  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:51.953011  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:51.953023  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:51 GMT
	I1208 18:30:51.953199  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:51.953572  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:52.450805  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:52.450825  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:52.450833  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:52.450839  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:52.453149  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:52.453179  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:52.453189  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:52.453199  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:52.453208  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:52.453217  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:52 GMT
	I1208 18:30:52.453223  429920 round_trippers.go:580]     Audit-Id: f912ffa5-1075-4870-a969-401a2f224e7b
	I1208 18:30:52.453231  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:52.453346  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:52.950128  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:52.950160  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:52.950169  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:52.950176  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:52.952596  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:52.952615  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:52.952622  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:52.952628  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:52.952633  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:52.952638  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:52 GMT
	I1208 18:30:52.952647  429920 round_trippers.go:580]     Audit-Id: f8cb9973-87f5-4719-a8f2-9317f113097f
	I1208 18:30:52.952652  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:52.952797  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:53.450590  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:53.450618  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:53.450629  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:53.450640  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:53.452821  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:53.452845  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:53.452856  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:53.452866  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:53.452875  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:53.452886  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:53 GMT
	I1208 18:30:53.452892  429920 round_trippers.go:580]     Audit-Id: d7b6b8ea-5451-4d4b-afd6-e2df5e568012
	I1208 18:30:53.452899  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:53.453010  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:53.950615  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:53.950642  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:53.950650  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:53.950657  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:53.952999  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:53.953027  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:53.953039  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:53.953048  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:53.953056  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:53.953064  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:53 GMT
	I1208 18:30:53.953072  429920 round_trippers.go:580]     Audit-Id: 083607bf-e0e2-4b30-8747-dd6a10ea3fc4
	I1208 18:30:53.953083  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:53.953287  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:53.953667  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:54.449929  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:54.449981  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:54.449995  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:54.450008  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:54.452254  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:54.452277  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:54.452287  429920 round_trippers.go:580]     Audit-Id: 9e896bfc-f805-4fbc-9b7c-948e2f8cc8ba
	I1208 18:30:54.452295  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:54.452311  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:54.452323  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:54.452332  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:54.452340  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:54 GMT
	I1208 18:30:54.452478  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:54.950050  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:54.950074  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:54.950082  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:54.950089  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:54.952470  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:54.952491  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:54.952498  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:54 GMT
	I1208 18:30:54.952505  429920 round_trippers.go:580]     Audit-Id: c28c218a-357f-4179-9697-bc9269bfbc0f
	I1208 18:30:54.952510  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:54.952518  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:54.952526  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:54.952537  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:54.952708  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:55.450343  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:55.450367  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:55.450375  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:55.450381  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:55.452648  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:55.452672  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:55.452682  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:55.452692  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:55.452701  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:55.452713  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:55 GMT
	I1208 18:30:55.452723  429920 round_trippers.go:580]     Audit-Id: 2d7e01b7-0e74-4868-bc51-24284e6a69f2
	I1208 18:30:55.452732  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:55.452917  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:55.950564  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:55.950591  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:55.950602  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:55.950613  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:55.952875  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:55.952897  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:55.952904  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:55.952910  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:55 GMT
	I1208 18:30:55.952915  429920 round_trippers.go:580]     Audit-Id: 28a581ed-f4c5-4fcb-8de3-b6e6a43b1108
	I1208 18:30:55.952923  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:55.952942  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:55.952961  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:55.953096  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:56.450753  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:56.450789  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:56.450801  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:56.450810  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:56.453088  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:56.453110  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:56.453117  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:56.453123  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:56.453128  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:56.453134  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:56.453139  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:56 GMT
	I1208 18:30:56.453144  429920 round_trippers.go:580]     Audit-Id: 0be593fc-22bd-45c5-952d-194556549bf6
	I1208 18:30:56.453281  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:56.453612  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:56.949890  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:56.949911  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:56.949919  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:56.949926  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:56.952314  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:56.952342  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:56.952353  429920 round_trippers.go:580]     Audit-Id: 6486c78b-cd44-49ce-8e4a-7ba32de29ce2
	I1208 18:30:56.952362  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:56.952371  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:56.952380  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:56.952392  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:56.952401  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:56 GMT
	I1208 18:30:56.952574  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:57.450192  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:57.450228  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:57.450237  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:57.450243  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:57.452502  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:57.452526  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:57.452535  429920 round_trippers.go:580]     Audit-Id: 0a9c4749-a655-4c53-968c-6bbd44065ef6
	I1208 18:30:57.452543  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:57.452550  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:57.452558  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:57.452566  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:57.452576  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:57 GMT
	I1208 18:30:57.452702  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:57.949867  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:57.949895  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:57.949910  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:57.949917  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:57.951929  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:30:57.951953  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:57.951963  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:57.951969  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:57 GMT
	I1208 18:30:57.951974  429920 round_trippers.go:580]     Audit-Id: 7f2192e6-1711-4403-8bb0-ac51db534cb5
	I1208 18:30:57.951979  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:57.951984  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:57.951990  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:57.952122  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:58.450818  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:58.450843  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:58.450852  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:58.450865  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:58.453297  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:58.453319  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:58.453328  429920 round_trippers.go:580]     Audit-Id: 1f656371-9c59-4879-883c-e4ba32ea2ce9
	I1208 18:30:58.453335  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:58.453343  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:58.453351  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:58.453360  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:58.453373  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:58 GMT
	I1208 18:30:58.453534  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:58.453863  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:30:58.950166  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:58.950192  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:58.950203  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:58.950213  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:58.952632  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:58.952654  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:58.952661  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:58.952667  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:58.952672  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:58.952677  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:58 GMT
	I1208 18:30:58.952684  429920 round_trippers.go:580]     Audit-Id: e483227f-2f62-46e5-9e30-ffc32f540d6e
	I1208 18:30:58.952691  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:58.952823  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:59.450512  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:59.450543  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:59.450555  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:59.450564  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:59.452743  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:59.452772  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:59.452780  429920 round_trippers.go:580]     Audit-Id: db5c17e1-d85c-4966-ac9e-a612de307c19
	I1208 18:30:59.452786  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:59.452791  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:59.452796  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:59.452801  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:59.452813  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:59 GMT
	I1208 18:30:59.452933  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:30:59.950635  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:30:59.950667  429920 round_trippers.go:469] Request Headers:
	I1208 18:30:59.950681  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:30:59.950691  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:30:59.952795  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:30:59.952816  429920 round_trippers.go:577] Response Headers:
	I1208 18:30:59.952822  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:30:59.952828  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:30:59.952833  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:30:59.952839  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:30:59.952844  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:30:59 GMT
	I1208 18:30:59.952849  429920 round_trippers.go:580]     Audit-Id: b3036c4a-7c44-454b-a7f0-d79b3e6b384d
	I1208 18:30:59.953032  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:00.450398  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:00.450426  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:00.450435  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:00.450441  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:00.452795  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:00.452814  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:00.452821  429920 round_trippers.go:580]     Audit-Id: 439dc4a8-9bd0-4cc3-8e7b-8166814c1eaa
	I1208 18:31:00.452827  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:00.452832  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:00.452837  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:00.452843  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:00.452850  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:00 GMT
	I1208 18:31:00.453007  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:00.950617  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:00.950641  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:00.950649  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:00.950655  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:00.952896  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:00.952928  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:00.952935  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:00.952941  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:00 GMT
	I1208 18:31:00.952946  429920 round_trippers.go:580]     Audit-Id: 246d130a-4f65-42f4-9ace-e433ab08af8b
	I1208 18:31:00.952951  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:00.952969  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:00.952977  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:00.953202  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:00.953646  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:31:01.450796  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:01.450817  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:01.450825  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:01.450831  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:01.452980  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:01.452997  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:01.453003  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:01.453013  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:01 GMT
	I1208 18:31:01.453018  429920 round_trippers.go:580]     Audit-Id: 8c6807e9-60f7-44e6-9046-b14a2066c134
	I1208 18:31:01.453023  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:01.453028  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:01.453033  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:01.453150  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:01.949905  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:01.949934  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:01.949946  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:01.949955  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:01.952431  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:01.952458  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:01.952466  429920 round_trippers.go:580]     Audit-Id: e7b683d3-838e-431c-b3ff-6e5d4716c707
	I1208 18:31:01.952471  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:01.952476  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:01.952481  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:01.952486  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:01.952492  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:01 GMT
	I1208 18:31:01.952652  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:02.450294  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:02.450321  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:02.450329  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:02.450336  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:02.452628  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:02.452648  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:02.452655  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:02.452661  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:02 GMT
	I1208 18:31:02.452666  429920 round_trippers.go:580]     Audit-Id: 8252d582-87f4-4559-996c-f372a5cc6821
	I1208 18:31:02.452678  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:02.452685  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:02.452696  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:02.452844  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:02.950755  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:02.950780  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:02.950789  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:02.950795  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:02.953327  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:02.953354  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:02.953365  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:02 GMT
	I1208 18:31:02.953374  429920 round_trippers.go:580]     Audit-Id: bdaa4437-52c0-4f03-9e05-f652589c68c6
	I1208 18:31:02.953383  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:02.953392  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:02.953398  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:02.953405  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:02.953530  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:02.953929  429920 node_ready.go:58] node "multinode-985452" has status "Ready":"False"
	I1208 18:31:03.450078  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:03.450098  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:03.450106  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:03.450114  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:03.452597  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:03.452617  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:03.452625  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:03.452631  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:03.452639  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:03.452647  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:03 GMT
	I1208 18:31:03.452654  429920 round_trippers.go:580]     Audit-Id: 71309aa6-4478-43ce-a9c3-315cca37e041
	I1208 18:31:03.452661  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:03.452796  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"355","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6250 chars]
	I1208 18:31:03.950069  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:03.950102  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:03.950110  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:03.950123  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:03.952830  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:03.952944  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:03.952958  429920 round_trippers.go:580]     Audit-Id: a2977bdd-5987-42ad-a168-d65c25f2c25d
	I1208 18:31:03.952971  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:03.952984  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:03.952996  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:03.953008  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:03.953021  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:03 GMT
	I1208 18:31:03.953197  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:03.953631  429920 node_ready.go:49] node "multinode-985452" has status "Ready":"True"
	I1208 18:31:03.953661  429920 node_ready.go:38] duration metric: took 32.010144042s waiting for node "multinode-985452" to be "Ready" ...
	I1208 18:31:03.953675  429920 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:31:03.953782  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:31:03.953794  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:03.953806  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:03.953820  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:03.957770  429920 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1208 18:31:03.957792  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:03.957801  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:03.957809  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:03.957818  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:03.957828  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:03 GMT
	I1208 18:31:03.957840  429920 round_trippers.go:580]     Audit-Id: a12b3cae-22cd-44df-bb9a-a63eecc8e288
	I1208 18:31:03.957850  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:03.958536  429920 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"431","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1208 18:31:03.962222  429920 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q28mc" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:03.962297  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q28mc
	I1208 18:31:03.962306  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:03.962313  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:03.962320  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:03.964511  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:03.964531  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:03.964541  429920 round_trippers.go:580]     Audit-Id: 5e8ee07c-426d-42e3-844f-65d4ce33d2ee
	I1208 18:31:03.964549  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:03.964557  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:03.964566  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:03.964574  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:03.964585  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:03 GMT
	I1208 18:31:03.964667  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"431","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1208 18:31:03.965188  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:03.965202  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:03.965211  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:03.965219  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:03.969804  429920 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1208 18:31:03.969825  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:03.969835  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:03.969845  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:03 GMT
	I1208 18:31:03.969853  429920 round_trippers.go:580]     Audit-Id: 7363ad4c-0af2-4ca4-86bc-e0be66b978ad
	I1208 18:31:03.969885  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:03.969902  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:03.969918  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:03.970507  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:03.970870  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q28mc
	I1208 18:31:03.970877  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:03.970885  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:03.970891  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:04.020944  429920 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1208 18:31:04.020970  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:04.020980  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:04.020990  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:04 GMT
	I1208 18:31:04.020997  429920 round_trippers.go:580]     Audit-Id: ab8984da-880e-4d64-8009-9bbaf0a102e2
	I1208 18:31:04.021006  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:04.021015  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:04.021025  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:04.021140  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"431","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1208 18:31:04.021631  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:04.021645  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:04.021654  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:04.021660  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:04.023439  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:04.023456  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:04.023463  429920 round_trippers.go:580]     Audit-Id: fd955363-4d3b-4f65-b54b-840b34aba9f1
	I1208 18:31:04.023468  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:04.023474  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:04.023479  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:04.023484  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:04.023489  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:04 GMT
	I1208 18:31:04.023592  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:04.524693  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q28mc
	I1208 18:31:04.524719  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:04.524727  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:04.524733  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:04.527204  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:04.527236  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:04.527247  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:04 GMT
	I1208 18:31:04.527255  429920 round_trippers.go:580]     Audit-Id: 8c74b06b-92a1-4ca1-8658-f43ee425eca5
	I1208 18:31:04.527261  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:04.527269  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:04.527277  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:04.527286  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:04.527394  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"431","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1208 18:31:04.527887  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:04.527905  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:04.527915  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:04.527924  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:04.529899  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:04.529921  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:04.529931  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:04.529939  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:04.529950  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:04 GMT
	I1208 18:31:04.529959  429920 round_trippers.go:580]     Audit-Id: e5b02d93-130a-4aae-bfa1-9db4a5eb2b7b
	I1208 18:31:04.529969  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:04.529979  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:04.530100  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.024811  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q28mc
	I1208 18:31:05.024838  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.024847  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.024853  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.027371  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:05.027398  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.027409  429920 round_trippers.go:580]     Audit-Id: f25e2884-5c87-42ce-a863-b035b6667fa6
	I1208 18:31:05.027417  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.027424  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.027431  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.027438  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.027446  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.027589  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"441","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1208 18:31:05.028226  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.028246  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.028257  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.028266  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.030208  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:05.030228  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.030236  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.030241  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.030247  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.030255  429920 round_trippers.go:580]     Audit-Id: 883b953e-462a-465f-8d71-038c4a100f77
	I1208 18:31:05.030263  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.030275  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.030423  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.030741  429920 pod_ready.go:92] pod "coredns-5dd5756b68-q28mc" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:05.030759  429920 pod_ready.go:81] duration metric: took 1.068512992s waiting for pod "coredns-5dd5756b68-q28mc" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.030768  429920 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.030816  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985452
	I1208 18:31:05.030823  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.030830  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.030836  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.032510  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:05.032530  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.032537  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.032543  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.032550  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.032555  429920 round_trippers.go:580]     Audit-Id: 4e460213-4a02-4ba0-a0e4-ed5b1a380b31
	I1208 18:31:05.032560  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.032566  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.032669  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985452","namespace":"kube-system","uid":"f7cc6b87-daec-4ba9-ad97-ae70c35b2022","resourceVersion":"311","creationTimestamp":"2023-12-08T18:30:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8903c2d6c0760f4e3d39229b9b6a1b8b","kubernetes.io/config.mirror":"8903c2d6c0760f4e3d39229b9b6a1b8b","kubernetes.io/config.seen":"2023-12-08T18:30:13.605165626Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1208 18:31:05.032987  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.032997  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.033004  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.033010  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.034546  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:05.034562  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.034568  429920 round_trippers.go:580]     Audit-Id: 5d6b01b9-101f-4dc2-ae1c-4cab9a6534ef
	I1208 18:31:05.034576  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.034584  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.034603  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.034613  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.034619  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.034791  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.035088  429920 pod_ready.go:92] pod "etcd-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:05.035102  429920 pod_ready.go:81] duration metric: took 4.328354ms waiting for pod "etcd-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.035112  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.035157  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985452
	I1208 18:31:05.035164  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.035171  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.035177  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.036718  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:05.036736  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.036745  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.036753  429920 round_trippers.go:580]     Audit-Id: fadd1cfb-f415-487d-b977-44ca3963d3c1
	I1208 18:31:05.036761  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.036768  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.036776  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.036800  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.036927  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985452","namespace":"kube-system","uid":"4453075e-130b-4948-ba80-8df11bbde032","resourceVersion":"314","creationTimestamp":"2023-12-08T18:30:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"1d9f2c65662e1a2b368034f468091c6f","kubernetes.io/config.mirror":"1d9f2c65662e1a2b368034f468091c6f","kubernetes.io/config.seen":"2023-12-08T18:30:19.727611796Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1208 18:31:05.037290  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.037304  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.037310  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.037316  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.038889  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:05.038907  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.038914  429920 round_trippers.go:580]     Audit-Id: 190550c3-36f7-40d6-940c-e80edebf2594
	I1208 18:31:05.038920  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.038926  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.038934  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.038944  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.038958  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.039144  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.039444  429920 pod_ready.go:92] pod "kube-apiserver-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:05.039465  429920 pod_ready.go:81] duration metric: took 4.343525ms waiting for pod "kube-apiserver-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.039476  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.039524  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985452
	I1208 18:31:05.039533  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.039540  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.039545  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.041028  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:05.041045  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.041051  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.041057  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.041063  429920 round_trippers.go:580]     Audit-Id: 04c1a765-1f69-498a-9398-42e1f25265a7
	I1208 18:31:05.041072  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.041083  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.041091  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.041211  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985452","namespace":"kube-system","uid":"4567aff3-4497-4c0b-a563-789999efb852","resourceVersion":"319","creationTimestamp":"2023-12-08T18:30:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"882fad5af594d8fc4750809e9dfef444","kubernetes.io/config.mirror":"882fad5af594d8fc4750809e9dfef444","kubernetes.io/config.seen":"2023-12-08T18:30:19.727613270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1208 18:31:05.150885  429920 request.go:629] Waited for 109.282353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.150945  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.150953  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.150961  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.150975  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.153119  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:05.153141  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.153149  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.153157  429920 round_trippers.go:580]     Audit-Id: 61f813de-089b-4af5-94b2-b2e2e85c4064
	I1208 18:31:05.153166  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.153173  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.153179  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.153187  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.153350  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.153662  429920 pod_ready.go:92] pod "kube-controller-manager-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:05.153677  429920 pod_ready.go:81] duration metric: took 114.192224ms waiting for pod "kube-controller-manager-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.153691  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wf8gr" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.351136  429920 request.go:629] Waited for 197.353159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf8gr
	I1208 18:31:05.351203  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf8gr
	I1208 18:31:05.351208  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.351216  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.351284  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.353723  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:05.353743  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.353750  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.353756  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.353766  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.353774  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.353794  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.353805  429920 round_trippers.go:580]     Audit-Id: efd297da-1180-4332-8b4e-6e48db9ebdfe
	I1208 18:31:05.353937  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wf8gr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f5b56b5d-7c2d-4dd2-8152-59d68bf94428","resourceVersion":"410","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3d5b21b6-2ec5-4510-b9cc-91174bf753f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5b21b6-2ec5-4510-b9cc-91174bf753f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1208 18:31:05.550744  429920 request.go:629] Waited for 196.345676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.550808  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.550813  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.550820  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.550827  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.553178  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:05.553201  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.553211  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.553219  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.553227  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.553234  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.553242  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.553253  429920 round_trippers.go:580]     Audit-Id: 21299f79-47f4-4cd3-842f-836543bae740
	I1208 18:31:05.553373  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.553729  429920 pod_ready.go:92] pod "kube-proxy-wf8gr" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:05.553749  429920 pod_ready.go:81] duration metric: took 400.049657ms waiting for pod "kube-proxy-wf8gr" in "kube-system" namespace to be "Ready" ...
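
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default token-bucket rate limiter: when a rest.Config leaves QPS and Burst at zero, the client defaults to QPS=5 and Burst=10, so bursts of back-to-back polling requests queue on the client side. A minimal sketch of where those knobs live; the kubeconfig path is a placeholder, not taken from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig; the path is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// These match client-go's defaults when the fields are left at zero;
	// they are what produce the "client-side throttling" waits logged above.
	cfg.QPS = 5
	cfg.Burst = 10
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}

Raising QPS/Burst would shave the ~100-400ms waits visible above, at the cost of more load on the apiserver.
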
	I1208 18:31:05.553758  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.750158  429920 request.go:629] Waited for 196.298177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985452
	I1208 18:31:05.750221  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985452
	I1208 18:31:05.750233  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.750244  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.750250  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.752760  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:05.752792  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.752803  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.752819  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.752830  429920 round_trippers.go:580]     Audit-Id: e4797750-7365-420b-a1f8-baa45a541403
	I1208 18:31:05.752841  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.752847  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.752855  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.752985  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985452","namespace":"kube-system","uid":"0e7e0dab-442a-4004-94ce-17e535110819","resourceVersion":"339","creationTimestamp":"2023-12-08T18:30:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"600c961249358e713a27cc452ad5a264","kubernetes.io/config.mirror":"600c961249358e713a27cc452ad5a264","kubernetes.io/config.seen":"2023-12-08T18:30:19.727614659Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1208 18:31:05.950814  429920 request.go:629] Waited for 197.425458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.950881  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:05.950886  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.950893  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.950899  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.953128  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:05.953151  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.953158  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.953163  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.953169  429920 round_trippers.go:580]     Audit-Id: 179f9c77-da1b-4cd4-b5ba-16f1d22ca2e4
	I1208 18:31:05.953176  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.953184  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.953191  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.953380  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 6067 chars]
	I1208 18:31:05.953775  429920 pod_ready.go:92] pod "kube-scheduler-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:05.953794  429920 pod_ready.go:81] duration metric: took 400.030251ms waiting for pod "kube-scheduler-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:05.953804  429920 pod_ready.go:38] duration metric: took 2.000095384s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
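
Each pod_ready step above issues paired GETs for the pod and its node, then reads the pod's Ready condition from the returned status. A minimal client-go sketch of that condition check; the kubeconfig path is a placeholder, and the pod name is taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True — the same
// signal the pod_ready lines above wait on.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
		"coredns-5dd5756b68-q28mc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}
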
	I1208 18:31:05.953822  429920 api_server.go:52] waiting for apiserver process to appear ...
	I1208 18:31:05.953912  429920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 18:31:05.963635  429920 command_runner.go:130] > 1443
	I1208 18:31:05.964409  429920 api_server.go:72] duration metric: took 34.077097037s to wait for apiserver process to appear ...
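
The apiserver process check above is a single pgrep run over SSH: -f matches against the full command line, -x requires the pattern to match it exactly (anchored), and -n picks the newest matching process, printing its PID (1443 here). A local sketch of the same probe with os/exec; minikube runs it inside the node via its ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints the PID of the newest process whose full command line
	// matches the pattern, or exits non-zero if none matches.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
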
	I1208 18:31:05.964432  429920 api_server.go:88] waiting for apiserver healthz status ...
	I1208 18:31:05.964451  429920 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1208 18:31:05.968714  429920 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1208 18:31:05.968786  429920 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1208 18:31:05.968796  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:05.968804  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:05.968812  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:05.969753  429920 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1208 18:31:05.969767  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:05.969776  429920 round_trippers.go:580]     Audit-Id: 0cc17e68-d05b-4f05-88c4-f8f07039332e
	I1208 18:31:05.969785  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:05.969794  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:05.969804  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:05.969815  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:05.969823  429920 round_trippers.go:580]     Content-Length: 264
	I1208 18:31:05.969837  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:05 GMT
	I1208 18:31:05.969858  429920 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1208 18:31:05.969950  429920 api_server.go:141] control plane version: v1.28.4
	I1208 18:31:05.969972  429920 api_server.go:131] duration metric: took 5.532771ms to wait for apiserver health ...
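
The health wait above maps onto two plain GETs against the apiserver: /healthz, which answers with the literal body "ok" when healthy, and /version, which returns the version.Info JSON shown in the log. A sketch of both using client-go's discovery client (rather than whatever transport minikube wires up internally); the kubeconfig path is again a placeholder:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz — the endpoint polled above; "ok" means healthy.
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))
	// GET /version — decodes the same JSON body logged above.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
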
	I1208 18:31:05.969980  429920 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 18:31:06.150426  429920 request.go:629] Waited for 180.345674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:31:06.150507  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:31:06.150516  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:06.150524  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:06.150530  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:06.153646  429920 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1208 18:31:06.153666  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:06.153678  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:06.153687  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:06.153696  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:06.153708  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:06 GMT
	I1208 18:31:06.153714  429920 round_trippers.go:580]     Audit-Id: 55a8cd56-9888-434e-949e-d5fe245484e8
	I1208 18:31:06.153721  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:06.154130  429920 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"441","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1208 18:31:06.156072  429920 system_pods.go:59] 8 kube-system pods found
	I1208 18:31:06.156106  429920 system_pods.go:61] "coredns-5dd5756b68-q28mc" [79df6371-4a56-4034-8e15-947b595ac5bb] Running
	I1208 18:31:06.156113  429920 system_pods.go:61] "etcd-multinode-985452" [f7cc6b87-daec-4ba9-ad97-ae70c35b2022] Running
	I1208 18:31:06.156117  429920 system_pods.go:61] "kindnet-nfbjn" [1def7bb5-ed1e-47af-b6ba-4f4df25b5988] Running
	I1208 18:31:06.156124  429920 system_pods.go:61] "kube-apiserver-multinode-985452" [4453075e-130b-4948-ba80-8df11bbde032] Running
	I1208 18:31:06.156129  429920 system_pods.go:61] "kube-controller-manager-multinode-985452" [4567aff3-4497-4c0b-a563-789999efb852] Running
	I1208 18:31:06.156134  429920 system_pods.go:61] "kube-proxy-wf8gr" [f5b56b5d-7c2d-4dd2-8152-59d68bf94428] Running
	I1208 18:31:06.156138  429920 system_pods.go:61] "kube-scheduler-multinode-985452" [0e7e0dab-442a-4004-94ce-17e535110819] Running
	I1208 18:31:06.156146  429920 system_pods.go:61] "storage-provisioner" [1eedf4a2-904b-41c1-997e-28f766fcddf3] Running
	I1208 18:31:06.156151  429920 system_pods.go:74] duration metric: took 186.163364ms to wait for pod list to return data ...
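
The system_pods phase is one namespaced List followed by a per-pod phase check, which is why a single throttled GET covers all eight pods above. A sketch reproducing that listing:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the "Running" check in the log lines above.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
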
	I1208 18:31:06.156160  429920 default_sa.go:34] waiting for default service account to be created ...
	I1208 18:31:06.350534  429920 request.go:629] Waited for 194.300336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1208 18:31:06.350620  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1208 18:31:06.350632  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:06.350645  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:06.350658  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:06.352872  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:06.352892  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:06.352898  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:06.352907  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:06.352915  429920 round_trippers.go:580]     Content-Length: 261
	I1208 18:31:06.352924  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:06 GMT
	I1208 18:31:06.352936  429920 round_trippers.go:580]     Audit-Id: 825157a8-8dd5-402f-a406-383176ba0f9c
	I1208 18:31:06.352950  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:06.352957  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:06.352977  429920 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a6315c57-e06c-4852-b573-36b22b69689d","resourceVersion":"330","creationTimestamp":"2023-12-08T18:30:31Z"}}]}
	I1208 18:31:06.353192  429920 default_sa.go:45] found service account: "default"
	I1208 18:31:06.353215  429920 default_sa.go:55] duration metric: took 197.046673ms for default service account to be created ...
	I1208 18:31:06.353225  429920 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 18:31:06.550676  429920 request.go:629] Waited for 197.369003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:31:06.550758  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:31:06.550774  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:06.550786  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:06.550796  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:06.554029  429920 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1208 18:31:06.554060  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:06.554070  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:06.554079  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:06 GMT
	I1208 18:31:06.554088  429920 round_trippers.go:580]     Audit-Id: f38dc694-5b4a-4052-990d-1f135480bb09
	I1208 18:31:06.554098  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:06.554111  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:06.554124  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:06.554746  429920 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"441","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1208 18:31:06.556522  429920 system_pods.go:86] 8 kube-system pods found
	I1208 18:31:06.556547  429920 system_pods.go:89] "coredns-5dd5756b68-q28mc" [79df6371-4a56-4034-8e15-947b595ac5bb] Running
	I1208 18:31:06.556554  429920 system_pods.go:89] "etcd-multinode-985452" [f7cc6b87-daec-4ba9-ad97-ae70c35b2022] Running
	I1208 18:31:06.556560  429920 system_pods.go:89] "kindnet-nfbjn" [1def7bb5-ed1e-47af-b6ba-4f4df25b5988] Running
	I1208 18:31:06.556566  429920 system_pods.go:89] "kube-apiserver-multinode-985452" [4453075e-130b-4948-ba80-8df11bbde032] Running
	I1208 18:31:06.556577  429920 system_pods.go:89] "kube-controller-manager-multinode-985452" [4567aff3-4497-4c0b-a563-789999efb852] Running
	I1208 18:31:06.556587  429920 system_pods.go:89] "kube-proxy-wf8gr" [f5b56b5d-7c2d-4dd2-8152-59d68bf94428] Running
	I1208 18:31:06.556595  429920 system_pods.go:89] "kube-scheduler-multinode-985452" [0e7e0dab-442a-4004-94ce-17e535110819] Running
	I1208 18:31:06.556607  429920 system_pods.go:89] "storage-provisioner" [1eedf4a2-904b-41c1-997e-28f766fcddf3] Running
	I1208 18:31:06.556616  429920 system_pods.go:126] duration metric: took 203.377892ms to wait for k8s-apps to be running ...
	I1208 18:31:06.556631  429920 system_svc.go:44] waiting for kubelet service to be running ...
	I1208 18:31:06.556684  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:31:06.566962  429920 system_svc.go:56] duration metric: took 10.328515ms (WaitForService) to wait for kubelet.
	I1208 18:31:06.566984  429920 kubeadm.go:581] duration metric: took 34.679679261s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
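
WaitForService boils down to an exit-code probe: systemctl is-active exits 0 only when the unit is active, and --quiet suppresses the state output. minikube runs it inside the node over SSH; a local sketch of the same probe, using the canonical single-unit form:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the kubelet unit is active; any other status
	// (or a missing unit) surfaces as a non-nil error here. Run locally
	// for illustration, without the sudo the in-node invocation uses.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
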
	I1208 18:31:06.567002  429920 node_conditions.go:102] verifying NodePressure condition ...
	I1208 18:31:06.750427  429920 request.go:629] Waited for 183.330188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1208 18:31:06.750518  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1208 18:31:06.750524  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:06.750532  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:06.750548  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:06.752836  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:06.752863  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:06.752873  429920 round_trippers.go:580]     Audit-Id: efafc17f-8ea7-4a38-866c-5ff3a22c9962
	I1208 18:31:06.752881  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:06.752889  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:06.752899  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:06.752919  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:06.752935  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:06 GMT
	I1208 18:31:06.753050  429920 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"425","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6120 chars]
	I1208 18:31:06.753424  429920 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1208 18:31:06.753443  429920 node_conditions.go:123] node cpu capacity is 8
	I1208 18:31:06.753453  429920 node_conditions.go:105] duration metric: took 186.447252ms to run NodePressure ...
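
The NodePressure verification reads the capacity figures straight off the NodeList: ephemeral-storage (304681132Ki above) and cpu (8). A sketch pulling the same two quantities via client-go, with the usual placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource.Quantity values, reported by the kubelet.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
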
	I1208 18:31:06.753465  429920 start.go:228] waiting for startup goroutines ...
	I1208 18:31:06.753474  429920 start.go:233] waiting for cluster config update ...
	I1208 18:31:06.753484  429920 start.go:242] writing updated cluster config ...
	I1208 18:31:06.755910  429920 out.go:177] 
	I1208 18:31:06.757479  429920 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:31:06.757555  429920 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/config.json ...
	I1208 18:31:06.759383  429920 out.go:177] * Starting worker node multinode-985452-m02 in cluster multinode-985452
	I1208 18:31:06.760596  429920 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:31:06.761955  429920 out.go:177] * Pulling base image ...
	I1208 18:31:06.763778  429920 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:31:06.763795  429920 cache.go:56] Caching tarball of preloaded images
	I1208 18:31:06.763802  429920 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:31:06.763882  429920 preload.go:174] Found /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 18:31:06.763894  429920 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I1208 18:31:06.763977  429920 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/config.json ...
	I1208 18:31:06.779595  429920 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon, skipping pull
	I1208 18:31:06.779622  429920 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in daemon, skipping load
	I1208 18:31:06.779642  429920 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:31:06.779682  429920 start.go:365] acquiring machines lock for multinode-985452-m02: {Name:mk10e5b590ba89b3ca15c165b699d2ca8aa14e53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:31:06.779782  429920 start.go:369] acquired machines lock for "multinode-985452-m02" in 80.637µs
	I1208 18:31:06.779806  429920 start.go:93] Provisioning new machine with config: &{Name:multinode-985452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1208 18:31:06.779881  429920 start.go:125] createHost starting for "m02" (driver="docker")
	I1208 18:31:06.781898  429920 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1208 18:31:06.782007  429920 start.go:159] libmachine.API.Create for "multinode-985452" (driver="docker")
	I1208 18:31:06.782028  429920 client.go:168] LocalClient.Create starting
	I1208 18:31:06.782080  429920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem
	I1208 18:31:06.782110  429920 main.go:141] libmachine: Decoding PEM data...
	I1208 18:31:06.782125  429920 main.go:141] libmachine: Parsing certificate...
	I1208 18:31:06.782179  429920 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem
	I1208 18:31:06.782200  429920 main.go:141] libmachine: Decoding PEM data...
	I1208 18:31:06.782210  429920 main.go:141] libmachine: Parsing certificate...
	I1208 18:31:06.782393  429920 cli_runner.go:164] Run: docker network inspect multinode-985452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:31:06.797696  429920 network_create.go:77] Found existing network {name:multinode-985452 subnet:0xc0032958f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1208 18:31:06.797736  429920 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-985452-m02" container
	I1208 18:31:06.797791  429920 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 18:31:06.813780  429920 cli_runner.go:164] Run: docker volume create multinode-985452-m02 --label name.minikube.sigs.k8s.io=multinode-985452-m02 --label created_by.minikube.sigs.k8s.io=true
	I1208 18:31:06.831469  429920 oci.go:103] Successfully created a docker volume multinode-985452-m02
	I1208 18:31:06.831561  429920 cli_runner.go:164] Run: docker run --rm --name multinode-985452-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985452-m02 --entrypoint /usr/bin/test -v multinode-985452-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -d /var/lib
	I1208 18:31:07.322323  429920 oci.go:107] Successfully prepared a docker volume multinode-985452-m02
	I1208 18:31:07.322369  429920 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:31:07.322394  429920 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 18:31:07.322498  429920 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985452-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 18:31:12.434179  429920 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985452-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.11162559s)
	I1208 18:31:12.434220  429920 kic.go:203] duration metric: took 5.111823 seconds to extract preloaded images to volume
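
The two docker runs above follow the kic provisioning pattern: create a named volume, prime it with a throwaway sidecar, then untar the preloaded image cache into it with a one-shot container whose entrypoint is tar (-I lz4 matches the .lz4 compression of the tarball). A sketch of the extraction step via os/exec; the image tag is the log's kicbase build with the digest dropped for brevity, and the tarball path is a placeholder:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738"            // digest omitted
	tarball := "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4" // placeholder

	// One-shot container whose only job is to untar the preload into the
	// named volume; --rm cleans it up once the extraction finishes.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "multinode-985452-m02:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload extracted into volume")
}
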
	W1208 18:31:12.434380  429920 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 18:31:12.434535  429920 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 18:31:12.485528  429920 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-985452-m02 --name multinode-985452-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985452-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-985452-m02 --network multinode-985452 --ip 192.168.58.3 --volume multinode-985452-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:31:12.791331  429920 cli_runner.go:164] Run: docker container inspect multinode-985452-m02 --format={{.State.Running}}
	I1208 18:31:12.808684  429920 cli_runner.go:164] Run: docker container inspect multinode-985452-m02 --format={{.State.Status}}
	I1208 18:31:12.826758  429920 cli_runner.go:164] Run: docker exec multinode-985452-m02 stat /var/lib/dpkg/alternatives/iptables
	I1208 18:31:12.875127  429920 oci.go:144] the created container "multinode-985452-m02" has a running status.
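A couple of hypothetical follow-up checks mirroring the inspects above: confirm the node container is running, then find the host port that 22/tcp (published to 127.0.0.1 in the docker run) landed on:

	docker container inspect multinode-985452-m02 --format '{{.State.Status}}'
	docker port multinode-985452-m02 22/tcp
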
	I1208 18:31:12.875167  429920 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa...
	I1208 18:31:13.111633  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1208 18:31:13.111674  429920 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 18:31:13.134501  429920 cli_runner.go:164] Run: docker container inspect multinode-985452-m02 --format={{.State.Status}}
	I1208 18:31:13.156205  429920 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 18:31:13.156244  429920 kic_runner.go:114] Args: [docker exec --privileged multinode-985452-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
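The kic_runner steps above amount to seeding SSH access: copy the freshly generated public key into the container and hand it to the docker user. Roughly (key path abbreviated):

	docker cp .minikube/machines/multinode-985452-m02/id_rsa.pub \
	  multinode-985452-m02:/home/docker/.ssh/authorized_keys
	docker exec --privileged multinode-985452-m02 \
	  chown docker:docker /home/docker/.ssh/authorized_keys
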
	I1208 18:31:13.232729  429920 cli_runner.go:164] Run: docker container inspect multinode-985452-m02 --format={{.State.Status}}
	I1208 18:31:13.248981  429920 machine.go:88] provisioning docker machine ...
	I1208 18:31:13.249024  429920 ubuntu.go:169] provisioning hostname "multinode-985452-m02"
	I1208 18:31:13.249091  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:13.268879  429920 main.go:141] libmachine: Using SSH client type: native
	I1208 18:31:13.269258  429920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1208 18:31:13.269277  429920 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985452-m02 && echo "multinode-985452-m02" | sudo tee /etc/hostname
	I1208 18:31:13.501386  429920 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985452-m02
	
	I1208 18:31:13.501487  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:13.518971  429920 main.go:141] libmachine: Using SSH client type: native
	I1208 18:31:13.519315  429920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1208 18:31:13.519343  429920 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985452-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985452-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985452-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 18:31:13.642571  429920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:31:13.642608  429920 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:31:13.642632  429920 ubuntu.go:177] setting up certificates
	I1208 18:31:13.642646  429920 provision.go:83] configureAuth start
	I1208 18:31:13.642723  429920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452-m02
	I1208 18:31:13.661864  429920 provision.go:138] copyHostCerts
	I1208 18:31:13.661904  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:31:13.661931  429920 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem, removing ...
	I1208 18:31:13.661940  429920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:31:13.662001  429920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:31:13.662086  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:31:13.662103  429920 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem, removing ...
	I1208 18:31:13.662108  429920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:31:13.662132  429920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:31:13.662186  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:31:13.662202  429920 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem, removing ...
	I1208 18:31:13.662206  429920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:31:13.662225  429920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:31:13.662279  429920 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.multinode-985452-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-985452-m02]
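The server certificate is minted with the SAN list shown above (node IP, localhost, hostname), which is what later TLS handshakes validate against. If a handshake fails, the SANs can be checked directly; a sketch, assuming the server.pem path from the log:

	openssl x509 -in .minikube/machines/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'
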
	I1208 18:31:13.748228  429920 provision.go:172] copyRemoteCerts
	I1208 18:31:13.748299  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:31:13.748335  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:13.764927  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa Username:docker}
	I1208 18:31:13.854822  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 18:31:13.854896  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:31:13.876478  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 18:31:13.876549  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1208 18:31:13.898737  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 18:31:13.898808  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 18:31:13.919990  429920 provision.go:86] duration metric: configureAuth took 277.324017ms
	I1208 18:31:13.920020  429920 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:31:13.920233  429920 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:31:13.920379  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:13.936990  429920 main.go:141] libmachine: Using SSH client type: native
	I1208 18:31:13.937429  429920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1208 18:31:13.937450  429920 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:31:14.150597  429920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:31:14.150630  429920 machine.go:91] provisioned docker machine in 901.625094ms
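The %!s(MISSING) in the logged command template is a formatting artifact from the logger; judging by the output echoed back above, the payload actually written is the CRIO_MINIKUBE_OPTIONS line. Expanded by hand:

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
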
	I1208 18:31:14.150642  429920 client.go:171] LocalClient.Create took 7.368606127s
	I1208 18:31:14.150663  429920 start.go:167] duration metric: libmachine.API.Create for "multinode-985452" took 7.368655485s
	I1208 18:31:14.150675  429920 start.go:300] post-start starting for "multinode-985452-m02" (driver="docker")
	I1208 18:31:14.150690  429920 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:31:14.150759  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:31:14.150813  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:14.167664  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa Username:docker}
	I1208 18:31:14.259683  429920 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:31:14.262765  429920 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1208 18:31:14.262791  429920 command_runner.go:130] > NAME="Ubuntu"
	I1208 18:31:14.262797  429920 command_runner.go:130] > VERSION_ID="22.04"
	I1208 18:31:14.262803  429920 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1208 18:31:14.262817  429920 command_runner.go:130] > VERSION_CODENAME=jammy
	I1208 18:31:14.262821  429920 command_runner.go:130] > ID=ubuntu
	I1208 18:31:14.262825  429920 command_runner.go:130] > ID_LIKE=debian
	I1208 18:31:14.262830  429920 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1208 18:31:14.262841  429920 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1208 18:31:14.262855  429920 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1208 18:31:14.262869  429920 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1208 18:31:14.262879  429920 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1208 18:31:14.262957  429920 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:31:14.263001  429920 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:31:14.263017  429920 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:31:14.263031  429920 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1208 18:31:14.263046  429920 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:31:14.263114  429920 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:31:14.263203  429920 filesync.go:149] local asset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> 3436282.pem in /etc/ssl/certs
	I1208 18:31:14.263216  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> /etc/ssl/certs/3436282.pem
	I1208 18:31:14.263321  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 18:31:14.271424  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:31:14.293679  429920 start.go:303] post-start completed in 142.983392ms
	I1208 18:31:14.294220  429920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452-m02
	I1208 18:31:14.310634  429920 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/config.json ...
	I1208 18:31:14.310883  429920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:31:14.310926  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:14.328637  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa Username:docker}
	I1208 18:31:14.415144  429920 command_runner.go:130] > 21%!(MISSING)
	I1208 18:31:14.415325  429920 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:31:14.419339  429920 command_runner.go:130] > 232G
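Disk pressure on the node is probed with two one-liners, percent of /var used and gigabytes free, exactly as run above:

	df -h /var  | awk 'NR==2{print $5}'   # e.g. 21%
	df -BG /var | awk 'NR==2{print $4}'   # e.g. 232G
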
	I1208 18:31:14.419559  429920 start.go:128] duration metric: createHost completed in 7.639662094s
	I1208 18:31:14.419583  429920 start.go:83] releasing machines lock for "multinode-985452-m02", held for 7.639789016s
	I1208 18:31:14.419647  429920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452-m02
	I1208 18:31:14.439133  429920 out.go:177] * Found network options:
	I1208 18:31:14.440647  429920 out.go:177]   - NO_PROXY=192.168.58.2
	W1208 18:31:14.442242  429920 proxy.go:119] fail to check proxy env: Error ip not in block
	W1208 18:31:14.442301  429920 proxy.go:119] fail to check proxy env: Error ip not in block
	I1208 18:31:14.442379  429920 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:31:14.442436  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:14.442436  429920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:31:14.442614  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:31:14.461223  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa Username:docker}
	I1208 18:31:14.461224  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa Username:docker}
	I1208 18:31:14.681545  429920 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 18:31:14.681708  429920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:31:14.685854  429920 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1208 18:31:14.685886  429920 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1208 18:31:14.685897  429920 command_runner.go:130] > Device: b0h/176d	Inode: 1299647     Links: 1
	I1208 18:31:14.685907  429920 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 18:31:14.685913  429920 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1208 18:31:14.685918  429920 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1208 18:31:14.685930  429920 command_runner.go:130] > Change: 2023-12-08 18:10:37.396658804 +0000
	I1208 18:31:14.685938  429920 command_runner.go:130] >  Birth: 2023-12-08 18:10:37.396658804 +0000
	I1208 18:31:14.686137  429920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:31:14.703854  429920 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:31:14.703940  429920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:31:14.730932  429920 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1208 18:31:14.731007  429920 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
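Competing CNI configs are disabled by renaming rather than deleting, so they can be restored later by stripping the suffix. The find/mv pattern from the log, with safer shell quoting:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
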
	I1208 18:31:14.731025  429920 start.go:475] detecting cgroup driver to use...
	I1208 18:31:14.731142  429920 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:31:14.731185  429920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:31:14.744997  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:31:14.754722  429920 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:31:14.754781  429920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:31:14.766851  429920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:31:14.779343  429920 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 18:31:14.858953  429920 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:31:14.943288  429920 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1208 18:31:14.943319  429920 docker.go:219] disabling docker service ...
	I1208 18:31:14.943359  429920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:31:14.961625  429920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:31:14.971997  429920 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:31:14.983265  429920 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1208 18:31:15.045539  429920 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:31:15.055910  429920 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1208 18:31:15.123256  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
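The docker and cri-docker teardown above follows a stop, disable socket, mask service sequence so neither can reclaim the CRI socket on reboot; condensed:

	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
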
	I1208 18:31:15.133683  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:31:15.147771  429920 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1208 18:31:15.147820  429920 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1208 18:31:15.147876  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:31:15.156664  429920 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 18:31:15.156721  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:31:15.165518  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:31:15.173786  429920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:31:15.182203  429920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 18:31:15.190248  429920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 18:31:15.196552  429920 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 18:31:15.197143  429920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 18:31:15.204180  429920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 18:31:15.277390  429920 ssh_runner.go:195] Run: sudo systemctl restart crio
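Condensed, the CRI-O tuning just applied is four in-place edits to one drop-in file plus a restart (path as in the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio
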
	I1208 18:31:15.384791  429920 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 18:31:15.384868  429920 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 18:31:15.388415  429920 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1208 18:31:15.388443  429920 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 18:31:15.388453  429920 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1208 18:31:15.388476  429920 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 18:31:15.388491  429920 command_runner.go:130] > Access: 2023-12-08 18:31:15.370793064 +0000
	I1208 18:31:15.388502  429920 command_runner.go:130] > Modify: 2023-12-08 18:31:15.370793064 +0000
	I1208 18:31:15.388518  429920 command_runner.go:130] > Change: 2023-12-08 18:31:15.370793064 +0000
	I1208 18:31:15.388524  429920 command_runner.go:130] >  Birth: -
	I1208 18:31:15.388546  429920 start.go:543] Will wait 60s for crictl version
	I1208 18:31:15.388587  429920 ssh_runner.go:195] Run: which crictl
	I1208 18:31:15.391570  429920 command_runner.go:130] > /usr/bin/crictl
	I1208 18:31:15.391644  429920 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 18:31:15.421601  429920 command_runner.go:130] > Version:  0.1.0
	I1208 18:31:15.421628  429920 command_runner.go:130] > RuntimeName:  cri-o
	I1208 18:31:15.421636  429920 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1208 18:31:15.421644  429920 command_runner.go:130] > RuntimeApiVersion:  v1
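Runtime readiness is then confirmed over the CRI socket; the equivalent manual probe:

	stat /var/run/crio/crio.sock && sudo crictl version
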
	I1208 18:31:15.423961  429920 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1208 18:31:15.424055  429920 ssh_runner.go:195] Run: crio --version
	I1208 18:31:15.457063  429920 command_runner.go:130] > crio version 1.24.6
	I1208 18:31:15.457084  429920 command_runner.go:130] > Version:          1.24.6
	I1208 18:31:15.457091  429920 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1208 18:31:15.457095  429920 command_runner.go:130] > GitTreeState:     clean
	I1208 18:31:15.457101  429920 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1208 18:31:15.457105  429920 command_runner.go:130] > GoVersion:        go1.18.2
	I1208 18:31:15.457109  429920 command_runner.go:130] > Compiler:         gc
	I1208 18:31:15.457114  429920 command_runner.go:130] > Platform:         linux/amd64
	I1208 18:31:15.457119  429920 command_runner.go:130] > Linkmode:         dynamic
	I1208 18:31:15.457126  429920 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1208 18:31:15.457130  429920 command_runner.go:130] > SeccompEnabled:   true
	I1208 18:31:15.457134  429920 command_runner.go:130] > AppArmorEnabled:  false
	I1208 18:31:15.457199  429920 ssh_runner.go:195] Run: crio --version
	I1208 18:31:15.489103  429920 command_runner.go:130] > crio version 1.24.6
	I1208 18:31:15.489129  429920 command_runner.go:130] > Version:          1.24.6
	I1208 18:31:15.489141  429920 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1208 18:31:15.489150  429920 command_runner.go:130] > GitTreeState:     clean
	I1208 18:31:15.489161  429920 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1208 18:31:15.489175  429920 command_runner.go:130] > GoVersion:        go1.18.2
	I1208 18:31:15.489182  429920 command_runner.go:130] > Compiler:         gc
	I1208 18:31:15.489192  429920 command_runner.go:130] > Platform:         linux/amd64
	I1208 18:31:15.489209  429920 command_runner.go:130] > Linkmode:         dynamic
	I1208 18:31:15.489225  429920 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1208 18:31:15.489237  429920 command_runner.go:130] > SeccompEnabled:   true
	I1208 18:31:15.489249  429920 command_runner.go:130] > AppArmorEnabled:  false
	I1208 18:31:15.494327  429920 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1208 18:31:15.496040  429920 out.go:177]   - env NO_PROXY=192.168.58.2
	I1208 18:31:15.497532  429920 cli_runner.go:164] Run: docker network inspect multinode-985452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 18:31:15.514095  429920 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1208 18:31:15.517921  429920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
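The host.minikube.internal entry is refreshed idempotently: strip any stale line, append the gateway IP, then copy the temp file back over /etc/hosts. Spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.58.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
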
	I1208 18:31:15.528268  429920 certs.go:56] Setting up /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452 for IP: 192.168.58.3
	I1208 18:31:15.528312  429920 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5abf3d3db90d2494e2d623a52fec5b2843f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:31:15.528461  429920 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key
	I1208 18:31:15.528497  429920 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key
	I1208 18:31:15.528511  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 18:31:15.528525  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 18:31:15.528537  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 18:31:15.528549  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 18:31:15.528594  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem (1338 bytes)
	W1208 18:31:15.528624  429920 certs.go:433] ignoring /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628_empty.pem, impossibly tiny 0 bytes
	I1208 18:31:15.528634  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 18:31:15.528661  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem (1082 bytes)
	I1208 18:31:15.528691  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem (1123 bytes)
	I1208 18:31:15.528722  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem (1679 bytes)
	I1208 18:31:15.528827  429920 certs.go:437] found cert: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:31:15.528863  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> /usr/share/ca-certificates/3436282.pem
	I1208 18:31:15.528877  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:31:15.528889  429920 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem -> /usr/share/ca-certificates/343628.pem
	I1208 18:31:15.529288  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 18:31:15.550526  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 18:31:15.572181  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 18:31:15.594052  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 18:31:15.615617  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /usr/share/ca-certificates/3436282.pem (1708 bytes)
	I1208 18:31:15.638519  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 18:31:15.659887  429920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/343628.pem --> /usr/share/ca-certificates/343628.pem (1338 bytes)
	I1208 18:31:15.681508  429920 ssh_runner.go:195] Run: openssl version
	I1208 18:31:15.686759  429920 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1208 18:31:15.686847  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3436282.pem && ln -fs /usr/share/ca-certificates/3436282.pem /etc/ssl/certs/3436282.pem"
	I1208 18:31:15.696017  429920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3436282.pem
	I1208 18:31:15.699306  429920 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 18:17 /usr/share/ca-certificates/3436282.pem
	I1208 18:31:15.699341  429920 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  8 18:17 /usr/share/ca-certificates/3436282.pem
	I1208 18:31:15.699390  429920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3436282.pem
	I1208 18:31:15.705304  429920 command_runner.go:130] > 3ec20f2e
	I1208 18:31:15.705534  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3436282.pem /etc/ssl/certs/3ec20f2e.0"
	I1208 18:31:15.714302  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1208 18:31:15.724016  429920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:31:15.727515  429920 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:31:15.727580  429920 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  8 18:11 /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:31:15.727630  429920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 18:31:15.734527  429920 command_runner.go:130] > b5213941
	I1208 18:31:15.734605  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1208 18:31:15.743998  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/343628.pem && ln -fs /usr/share/ca-certificates/343628.pem /etc/ssl/certs/343628.pem"
	I1208 18:31:15.752794  429920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/343628.pem
	I1208 18:31:15.755953  429920 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 18:17 /usr/share/ca-certificates/343628.pem
	I1208 18:31:15.756005  429920 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  8 18:17 /usr/share/ca-certificates/343628.pem
	I1208 18:31:15.756054  429920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/343628.pem
	I1208 18:31:15.762143  429920 command_runner.go:130] > 51391683
	I1208 18:31:15.762342  429920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/343628.pem /etc/ssl/certs/51391683.0"
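Each CA is installed twice over: the PEM lands under /usr/share/ca-certificates, and an OpenSSL subject-hash symlink is dropped in /etc/ssl/certs, which is how verifiers locate it. The pattern for one cert:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
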
	I1208 18:31:15.770802  429920 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1208 18:31:15.773889  429920 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:31:15.773944  429920 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1208 18:31:15.774043  429920 ssh_runner.go:195] Run: crio config
	I1208 18:31:15.809980  429920 command_runner.go:130] ! time="2023-12-08 18:31:15.809544579Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1208 18:31:15.810017  429920 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1208 18:31:15.815994  429920 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1208 18:31:15.816023  429920 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1208 18:31:15.816031  429920 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1208 18:31:15.816037  429920 command_runner.go:130] > #
	I1208 18:31:15.816048  429920 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1208 18:31:15.816059  429920 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1208 18:31:15.816069  429920 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1208 18:31:15.816087  429920 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1208 18:31:15.816105  429920 command_runner.go:130] > # reload'.
	I1208 18:31:15.816117  429920 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1208 18:31:15.816129  429920 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1208 18:31:15.816143  429920 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1208 18:31:15.816156  429920 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1208 18:31:15.816166  429920 command_runner.go:130] > [crio]
	I1208 18:31:15.816176  429920 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1208 18:31:15.816188  429920 command_runner.go:130] > # containers images, in this directory.
	I1208 18:31:15.816202  429920 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1208 18:31:15.816215  429920 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1208 18:31:15.816228  429920 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1208 18:31:15.816242  429920 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1208 18:31:15.816255  429920 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1208 18:31:15.816266  429920 command_runner.go:130] > # storage_driver = "vfs"
	I1208 18:31:15.816276  429920 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1208 18:31:15.816287  429920 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1208 18:31:15.816294  429920 command_runner.go:130] > # storage_option = [
	I1208 18:31:15.816299  429920 command_runner.go:130] > # ]
	I1208 18:31:15.816313  429920 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1208 18:31:15.816327  429920 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1208 18:31:15.816339  429920 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1208 18:31:15.816351  429920 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1208 18:31:15.816364  429920 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1208 18:31:15.816374  429920 command_runner.go:130] > # always happen on a node reboot
	I1208 18:31:15.816382  429920 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1208 18:31:15.816398  429920 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1208 18:31:15.816411  429920 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1208 18:31:15.816428  429920 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1208 18:31:15.816440  429920 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1208 18:31:15.816456  429920 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1208 18:31:15.816470  429920 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1208 18:31:15.816477  429920 command_runner.go:130] > # internal_wipe = true
	I1208 18:31:15.816487  429920 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1208 18:31:15.816501  429920 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1208 18:31:15.816514  429920 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1208 18:31:15.816525  429920 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1208 18:31:15.816538  429920 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1208 18:31:15.816548  429920 command_runner.go:130] > [crio.api]
	I1208 18:31:15.816556  429920 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1208 18:31:15.816563  429920 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1208 18:31:15.816571  429920 command_runner.go:130] > # IP address on which the stream server will listen.
	I1208 18:31:15.816582  429920 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1208 18:31:15.816596  429920 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1208 18:31:15.816608  429920 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1208 18:31:15.816618  429920 command_runner.go:130] > # stream_port = "0"
	I1208 18:31:15.816630  429920 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1208 18:31:15.816640  429920 command_runner.go:130] > # stream_enable_tls = false
	I1208 18:31:15.816647  429920 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1208 18:31:15.816656  429920 command_runner.go:130] > # stream_idle_timeout = ""
	I1208 18:31:15.816667  429920 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1208 18:31:15.816681  429920 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1208 18:31:15.816690  429920 command_runner.go:130] > # minutes.
	I1208 18:31:15.816697  429920 command_runner.go:130] > # stream_tls_cert = ""
	I1208 18:31:15.816710  429920 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1208 18:31:15.816724  429920 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1208 18:31:15.816732  429920 command_runner.go:130] > # stream_tls_key = ""
	I1208 18:31:15.816738  429920 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1208 18:31:15.816748  429920 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1208 18:31:15.816760  429920 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1208 18:31:15.816770  429920 command_runner.go:130] > # stream_tls_ca = ""
	I1208 18:31:15.816784  429920 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1208 18:31:15.816796  429920 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1208 18:31:15.816811  429920 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1208 18:31:15.816820  429920 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1208 18:31:15.816845  429920 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1208 18:31:15.816863  429920 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1208 18:31:15.816874  429920 command_runner.go:130] > [crio.runtime]
	I1208 18:31:15.816887  429920 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1208 18:31:15.816899  429920 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1208 18:31:15.816907  429920 command_runner.go:130] > # "nofile=1024:2048"
	I1208 18:31:15.816915  429920 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1208 18:31:15.816926  429920 command_runner.go:130] > # default_ulimits = [
	I1208 18:31:15.816935  429920 command_runner.go:130] > # ]
	I1208 18:31:15.816945  429920 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1208 18:31:15.816955  429920 command_runner.go:130] > # no_pivot = false
	I1208 18:31:15.816964  429920 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1208 18:31:15.816978  429920 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1208 18:31:15.816988  429920 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1208 18:31:15.816997  429920 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1208 18:31:15.817005  429920 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1208 18:31:15.817021  429920 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 18:31:15.817030  429920 command_runner.go:130] > # conmon = ""
	I1208 18:31:15.817038  429920 command_runner.go:130] > # Cgroup setting for conmon
	I1208 18:31:15.817052  429920 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1208 18:31:15.817059  429920 command_runner.go:130] > conmon_cgroup = "pod"
	I1208 18:31:15.817072  429920 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1208 18:31:15.817080  429920 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1208 18:31:15.817090  429920 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1208 18:31:15.817100  429920 command_runner.go:130] > # conmon_env = [
	I1208 18:31:15.817110  429920 command_runner.go:130] > # ]
	I1208 18:31:15.817119  429920 command_runner.go:130] > # Additional environment variables to set for all the
	I1208 18:31:15.817131  429920 command_runner.go:130] > # containers. These are overridden if set in the
	I1208 18:31:15.817143  429920 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1208 18:31:15.817153  429920 command_runner.go:130] > # default_env = [
	I1208 18:31:15.817159  429920 command_runner.go:130] > # ]
	I1208 18:31:15.817165  429920 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1208 18:31:15.817170  429920 command_runner.go:130] > # selinux = false
	I1208 18:31:15.817187  429920 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1208 18:31:15.817201  429920 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1208 18:31:15.817214  429920 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1208 18:31:15.817225  429920 command_runner.go:130] > # seccomp_profile = ""
	I1208 18:31:15.817237  429920 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1208 18:31:15.817248  429920 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1208 18:31:15.817255  429920 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1208 18:31:15.817265  429920 command_runner.go:130] > # which might increase security.
	I1208 18:31:15.817278  429920 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1208 18:31:15.817291  429920 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1208 18:31:15.817304  429920 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1208 18:31:15.817317  429920 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1208 18:31:15.817330  429920 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1208 18:31:15.817338  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:31:15.817344  429920 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1208 18:31:15.817357  429920 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1208 18:31:15.817369  429920 command_runner.go:130] > # the cgroup blockio controller.
	I1208 18:31:15.817379  429920 command_runner.go:130] > # blockio_config_file = ""
	I1208 18:31:15.817390  429920 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1208 18:31:15.817400  429920 command_runner.go:130] > # irqbalance daemon.
	I1208 18:31:15.817410  429920 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1208 18:31:15.817420  429920 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1208 18:31:15.817429  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:31:15.817436  429920 command_runner.go:130] > # rdt_config_file = ""
	I1208 18:31:15.817449  429920 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1208 18:31:15.817459  429920 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1208 18:31:15.817469  429920 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1208 18:31:15.817480  429920 command_runner.go:130] > # separate_pull_cgroup = ""
	I1208 18:31:15.817493  429920 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1208 18:31:15.817504  429920 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1208 18:31:15.817511  429920 command_runner.go:130] > # will be added.
	I1208 18:31:15.817517  429920 command_runner.go:130] > # default_capabilities = [
	I1208 18:31:15.817526  429920 command_runner.go:130] > # 	"CHOWN",
	I1208 18:31:15.817533  429920 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1208 18:31:15.817540  429920 command_runner.go:130] > # 	"FSETID",
	I1208 18:31:15.817547  429920 command_runner.go:130] > # 	"FOWNER",
	I1208 18:31:15.817636  429920 command_runner.go:130] > # 	"SETGID",
	I1208 18:31:15.817748  429920 command_runner.go:130] > # 	"SETUID",
	I1208 18:31:15.817769  429920 command_runner.go:130] > # 	"SETPCAP",
	I1208 18:31:15.817778  429920 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1208 18:31:15.817784  429920 command_runner.go:130] > # 	"KILL",
	I1208 18:31:15.817793  429920 command_runner.go:130] > # ]
	I1208 18:31:15.817807  429920 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1208 18:31:15.817820  429920 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1208 18:31:15.817832  429920 command_runner.go:130] > # add_inheritable_capabilities = true
	I1208 18:31:15.817847  429920 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1208 18:31:15.817860  429920 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 18:31:15.817870  429920 command_runner.go:130] > # default_sysctls = [
	I1208 18:31:15.817876  429920 command_runner.go:130] > # ]
	I1208 18:31:15.817887  429920 command_runner.go:130] > # List of devices on the host that a
	I1208 18:31:15.817899  429920 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1208 18:31:15.817907  429920 command_runner.go:130] > # allowed_devices = [
	I1208 18:31:15.817911  429920 command_runner.go:130] > # 	"/dev/fuse",
	I1208 18:31:15.817919  429920 command_runner.go:130] > # ]
	I1208 18:31:15.817928  429920 command_runner.go:130] > # List of additional devices. specified as
	I1208 18:31:15.817989  429920 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1208 18:31:15.817998  429920 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1208 18:31:15.818008  429920 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1208 18:31:15.818020  429920 command_runner.go:130] > # additional_devices = [
	I1208 18:31:15.818026  429920 command_runner.go:130] > # ]
	I1208 18:31:15.818038  429920 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1208 18:31:15.818047  429920 command_runner.go:130] > # cdi_spec_dirs = [
	I1208 18:31:15.818054  429920 command_runner.go:130] > # 	"/etc/cdi",
	I1208 18:31:15.818064  429920 command_runner.go:130] > # 	"/var/run/cdi",
	I1208 18:31:15.818070  429920 command_runner.go:130] > # ]
	I1208 18:31:15.818081  429920 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1208 18:31:15.818092  429920 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1208 18:31:15.818100  429920 command_runner.go:130] > # Defaults to false.
	I1208 18:31:15.818111  429920 command_runner.go:130] > # device_ownership_from_security_context = false
	I1208 18:31:15.818128  429920 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1208 18:31:15.818142  429920 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1208 18:31:15.818151  429920 command_runner.go:130] > # hooks_dir = [
	I1208 18:31:15.818161  429920 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1208 18:31:15.818168  429920 command_runner.go:130] > # ]
	I1208 18:31:15.818176  429920 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1208 18:31:15.818189  429920 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1208 18:31:15.818202  429920 command_runner.go:130] > # its default mounts from the following two files:
	I1208 18:31:15.818208  429920 command_runner.go:130] > #
	I1208 18:31:15.818222  429920 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1208 18:31:15.818235  429920 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1208 18:31:15.818247  429920 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1208 18:31:15.818254  429920 command_runner.go:130] > #
	I1208 18:31:15.818260  429920 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1208 18:31:15.818274  429920 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1208 18:31:15.818288  429920 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1208 18:31:15.818300  429920 command_runner.go:130] > #      only add mounts it finds in this file.
	I1208 18:31:15.818306  429920 command_runner.go:130] > #
	I1208 18:31:15.818317  429920 command_runner.go:130] > # default_mounts_file = ""
	I1208 18:31:15.818326  429920 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1208 18:31:15.818338  429920 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1208 18:31:15.818345  429920 command_runner.go:130] > # pids_limit = 0
	I1208 18:31:15.818360  429920 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1208 18:31:15.818374  429920 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1208 18:31:15.818388  429920 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1208 18:31:15.818404  429920 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1208 18:31:15.818413  429920 command_runner.go:130] > # log_size_max = -1
	I1208 18:31:15.818424  429920 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1208 18:31:15.818430  429920 command_runner.go:130] > # log_to_journald = false
	I1208 18:31:15.818440  429920 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1208 18:31:15.818468  429920 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1208 18:31:15.818480  429920 command_runner.go:130] > # Path to directory for container attach sockets.
	I1208 18:31:15.818492  429920 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1208 18:31:15.818507  429920 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1208 18:31:15.818516  429920 command_runner.go:130] > # bind_mount_prefix = ""
	I1208 18:31:15.818522  429920 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1208 18:31:15.818529  429920 command_runner.go:130] > # read_only = false
	I1208 18:31:15.818540  429920 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1208 18:31:15.818553  429920 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1208 18:31:15.818565  429920 command_runner.go:130] > # live configuration reload.
	I1208 18:31:15.818575  429920 command_runner.go:130] > # log_level = "info"
	I1208 18:31:15.818587  429920 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1208 18:31:15.818601  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:31:15.818610  429920 command_runner.go:130] > # log_filter = ""
	I1208 18:31:15.818620  429920 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1208 18:31:15.818635  429920 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1208 18:31:15.818646  429920 command_runner.go:130] > # separated by comma.
	I1208 18:31:15.818656  429920 command_runner.go:130] > # uid_mappings = ""
	I1208 18:31:15.818667  429920 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1208 18:31:15.818680  429920 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1208 18:31:15.818689  429920 command_runner.go:130] > # separated by comma.
	I1208 18:31:15.818696  429920 command_runner.go:130] > # gid_mappings = ""
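An illustrative mapping in the containerID:HostID:Size form described above (the ranges are assumed examples): container IDs 0-65535 mapped onto host IDs starting at 100000:

    uid_mappings = "0:100000:65536"
    gid_mappings = "0:100000:65536"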
	I1208 18:31:15.818705  429920 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1208 18:31:15.818719  429920 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 18:31:15.818730  429920 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 18:31:15.818742  429920 command_runner.go:130] > # minimum_mappable_uid = -1
	I1208 18:31:15.818756  429920 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1208 18:31:15.818769  429920 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1208 18:31:15.818779  429920 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1208 18:31:15.818786  429920 command_runner.go:130] > # minimum_mappable_gid = -1
	I1208 18:31:15.818796  429920 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1208 18:31:15.818810  429920 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1208 18:31:15.818822  429920 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1208 18:31:15.818829  429920 command_runner.go:130] > # ctr_stop_timeout = 30
	I1208 18:31:15.818842  429920 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1208 18:31:15.818860  429920 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1208 18:31:15.818868  429920 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1208 18:31:15.818875  429920 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1208 18:31:15.818885  429920 command_runner.go:130] > # drop_infra_ctr = true
	I1208 18:31:15.818899  429920 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1208 18:31:15.818911  429920 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1208 18:31:15.818928  429920 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1208 18:31:15.818938  429920 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1208 18:31:15.818948  429920 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1208 18:31:15.818955  429920 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1208 18:31:15.818963  429920 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1208 18:31:15.818979  429920 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1208 18:31:15.818989  429920 command_runner.go:130] > # pinns_path = ""
	I1208 18:31:15.819002  429920 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1208 18:31:15.819017  429920 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1208 18:31:15.819031  429920 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1208 18:31:15.819036  429920 command_runner.go:130] > # default_runtime = "runc"
	I1208 18:31:15.819043  429920 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1208 18:31:15.819054  429920 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I1208 18:31:15.819074  429920 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1208 18:31:15.819085  429920 command_runner.go:130] > # creation as a file is not desired either.
	I1208 18:31:15.819101  429920 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1208 18:31:15.819113  429920 command_runner.go:130] > # the hostname is being managed dynamically.
	I1208 18:31:15.819122  429920 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1208 18:31:15.819126  429920 command_runner.go:130] > # ]
	I1208 18:31:15.819135  429920 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1208 18:31:15.819149  429920 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1208 18:31:15.819163  429920 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1208 18:31:15.819176  429920 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1208 18:31:15.819185  429920 command_runner.go:130] > #
	I1208 18:31:15.819193  429920 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1208 18:31:15.819205  429920 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1208 18:31:15.819212  429920 command_runner.go:130] > #  runtime_type = "oci"
	I1208 18:31:15.819217  429920 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1208 18:31:15.819225  429920 command_runner.go:130] > #  privileged_without_host_devices = false
	I1208 18:31:15.819236  429920 command_runner.go:130] > #  allowed_annotations = []
	I1208 18:31:15.819247  429920 command_runner.go:130] > # Where:
	I1208 18:31:15.819256  429920 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1208 18:31:15.819269  429920 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1208 18:31:15.819285  429920 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1208 18:31:15.819296  429920 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1208 18:31:15.819302  429920 command_runner.go:130] > #   in $PATH.
	I1208 18:31:15.819312  429920 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1208 18:31:15.819324  429920 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1208 18:31:15.819335  429920 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1208 18:31:15.819344  429920 command_runner.go:130] > #   state.
	I1208 18:31:15.819380  429920 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1208 18:31:15.819389  429920 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1208 18:31:15.819399  429920 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1208 18:31:15.819412  429920 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1208 18:31:15.819426  429920 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1208 18:31:15.819436  429920 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1208 18:31:15.819448  429920 command_runner.go:130] > #   The currently recognized values are:
	I1208 18:31:15.819462  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1208 18:31:15.819472  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1208 18:31:15.819485  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1208 18:31:15.819498  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1208 18:31:15.819513  429920 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1208 18:31:15.819527  429920 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1208 18:31:15.819537  429920 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1208 18:31:15.819551  429920 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1208 18:31:15.819560  429920 command_runner.go:130] > #   should be moved to the container's cgroup
	I1208 18:31:15.819566  429920 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1208 18:31:15.819578  429920 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1208 18:31:15.819588  429920 command_runner.go:130] > runtime_type = "oci"
	I1208 18:31:15.819597  429920 command_runner.go:130] > runtime_root = "/run/runc"
	I1208 18:31:15.819607  429920 command_runner.go:130] > runtime_config_path = ""
	I1208 18:31:15.819617  429920 command_runner.go:130] > monitor_path = ""
	I1208 18:31:15.819625  429920 command_runner.go:130] > monitor_cgroup = ""
	I1208 18:31:15.819635  429920 command_runner.go:130] > monitor_exec_cgroup = ""
	I1208 18:31:15.819666  429920 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1208 18:31:15.819677  429920 command_runner.go:130] > # running containers
	I1208 18:31:15.819688  429920 command_runner.go:130] > #[crio.runtime.runtimes.crun]
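If crun were enabled, its table entry would mirror the runc entry above; this is a sketch assuming the common installation path, not configuration from this run:

    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"   # assumed location of the crun binary
    runtime_type = "oci"
    runtime_root = "/run/crun"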
	I1208 18:31:15.819701  429920 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1208 18:31:15.819715  429920 command_runner.go:130] > # VMs. Kata provides additional isolation from the host, minimizing the host attack
	I1208 18:31:15.819724  429920 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1208 18:31:15.819733  429920 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1208 18:31:15.819738  429920 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1208 18:31:15.819748  429920 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1208 18:31:15.819758  429920 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1208 18:31:15.819769  429920 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1208 18:31:15.819780  429920 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1208 18:31:15.819793  429920 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1208 18:31:15.819805  429920 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1208 18:31:15.819817  429920 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1208 18:31:15.819827  429920 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1208 18:31:15.819844  429920 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1208 18:31:15.819867  429920 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1208 18:31:15.819885  429920 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1208 18:31:15.819900  429920 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1208 18:31:15.819906  429920 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1208 18:31:15.819919  429920 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1208 18:31:15.819930  429920 command_runner.go:130] > # Example:
	I1208 18:31:15.819942  429920 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1208 18:31:15.819953  429920 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1208 18:31:15.819965  429920 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1208 18:31:15.819976  429920 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1208 18:31:15.819985  429920 command_runner.go:130] > # cpuset = "0-1"
	I1208 18:31:15.819990  429920 command_runner.go:130] > # cpushares = "0"
	I1208 18:31:15.819996  429920 command_runner.go:130] > # Where:
	I1208 18:31:15.820004  429920 command_runner.go:130] > # The workload name is workload-type.
	I1208 18:31:15.820019  429920 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1208 18:31:15.820032  429920 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1208 18:31:15.820045  429920 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1208 18:31:15.820060  429920 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1208 18:31:15.820072  429920 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1208 18:31:15.820078  429920 command_runner.go:130] > # 
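Tying the workload pieces together, a hedged sketch (the workload name, annotation values and the 512-share default are illustrative, not from this run):

    [crio.runtime.workloads.throttled]
    activation_annotation = "io.crio/throttled"
    annotation_prefix = "io.crio.throttled"
    [crio.runtime.workloads.throttled.resources]
    cpushares = "512"
    # a pod opts in with the annotation "io.crio/throttled" (key only); a
    # per-container override would follow the pattern shown above, e.g.
    # "io.crio.throttled/$container_name = {"cpushares": "256"}"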
	I1208 18:31:15.820086  429920 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1208 18:31:15.820091  429920 command_runner.go:130] > #
	I1208 18:31:15.820105  429920 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1208 18:31:15.820118  429920 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1208 18:31:15.820132  429920 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1208 18:31:15.820145  429920 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1208 18:31:15.820157  429920 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1208 18:31:15.820164  429920 command_runner.go:130] > [crio.image]
	I1208 18:31:15.820171  429920 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1208 18:31:15.820182  429920 command_runner.go:130] > # default_transport = "docker://"
	I1208 18:31:15.820196  429920 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1208 18:31:15.820210  429920 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1208 18:31:15.820220  429920 command_runner.go:130] > # global_auth_file = ""
	I1208 18:31:15.820229  429920 command_runner.go:130] > # The image used to instantiate infra containers.
	I1208 18:31:15.820240  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:31:15.820249  429920 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1208 18:31:15.820262  429920 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1208 18:31:15.820275  429920 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1208 18:31:15.820284  429920 command_runner.go:130] > # This option supports live configuration reload.
	I1208 18:31:15.820295  429920 command_runner.go:130] > # pause_image_auth_file = ""
	I1208 18:31:15.820307  429920 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1208 18:31:15.820320  429920 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1208 18:31:15.820332  429920 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1208 18:31:15.820340  429920 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1208 18:31:15.820345  429920 command_runner.go:130] > # pause_command = "/pause"
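For illustration of the fallback behavior just described (the override below is an assumed example, not from this run): setting pause_command to "" defers to the entrypoint and command baked into the pause image:

    pause_image = "registry.k8s.io/pause:3.9"
    pause_command = ""   # fall back to the pause image's own entrypoint/command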
	I1208 18:31:15.820357  429920 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1208 18:31:15.820369  429920 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1208 18:31:15.820383  429920 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1208 18:31:15.820396  429920 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1208 18:31:15.820408  429920 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1208 18:31:15.820418  429920 command_runner.go:130] > # signature_policy = ""
	I1208 18:31:15.820438  429920 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1208 18:31:15.820448  429920 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1208 18:31:15.820459  429920 command_runner.go:130] > # changing them here.
	I1208 18:31:15.820463  429920 command_runner.go:130] > # insecure_registries = [
	I1208 18:31:15.820466  429920 command_runner.go:130] > # ]
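A hedged example of the insecure_registries list (the registry address is assumed; per the note above, /etc/containers/registries.conf remains the preferred place to configure this):

    insecure_registries = [
        "registry.internal.example:5000",   # hypothetical private registry without TLS
    ]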
	I1208 18:31:15.820475  429920 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1208 18:31:15.820482  429920 command_runner.go:130] > # ignore; the last of these will ignore volumes entirely.
	I1208 18:31:15.820487  429920 command_runner.go:130] > # image_volumes = "mkdir"
	I1208 18:31:15.820494  429920 command_runner.go:130] > # Temporary directory to use for storing big files
	I1208 18:31:15.820498  429920 command_runner.go:130] > # big_files_temporary_dir = ""
	I1208 18:31:15.820506  429920 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1208 18:31:15.820516  429920 command_runner.go:130] > # CNI plugins.
	I1208 18:31:15.820523  429920 command_runner.go:130] > [crio.network]
	I1208 18:31:15.820536  429920 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1208 18:31:15.820548  429920 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1208 18:31:15.820559  429920 command_runner.go:130] > # cni_default_network = ""
	I1208 18:31:15.820569  429920 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1208 18:31:15.820580  429920 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1208 18:31:15.820587  429920 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1208 18:31:15.820597  429920 command_runner.go:130] > # plugin_dirs = [
	I1208 18:31:15.820604  429920 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1208 18:31:15.820608  429920 command_runner.go:130] > # ]
	I1208 18:31:15.820615  429920 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1208 18:31:15.820620  429920 command_runner.go:130] > [crio.metrics]
	I1208 18:31:15.820628  429920 command_runner.go:130] > # Globally enable or disable metrics support.
	I1208 18:31:15.820635  429920 command_runner.go:130] > # enable_metrics = false
	I1208 18:31:15.820645  429920 command_runner.go:130] > # Specify enabled metrics collectors.
	I1208 18:31:15.820656  429920 command_runner.go:130] > # By default, all metrics are enabled.
	I1208 18:31:15.820667  429920 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1208 18:31:15.820681  429920 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1208 18:31:15.820695  429920 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1208 18:31:15.820706  429920 command_runner.go:130] > # metrics_collectors = [
	I1208 18:31:15.820716  429920 command_runner.go:130] > # 	"operations",
	I1208 18:31:15.820726  429920 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1208 18:31:15.820737  429920 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1208 18:31:15.820745  429920 command_runner.go:130] > # 	"operations_errors",
	I1208 18:31:15.820752  429920 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1208 18:31:15.820763  429920 command_runner.go:130] > # 	"image_pulls_by_name",
	I1208 18:31:15.820773  429920 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1208 18:31:15.820783  429920 command_runner.go:130] > # 	"image_pulls_failures",
	I1208 18:31:15.820795  429920 command_runner.go:130] > # 	"image_pulls_successes",
	I1208 18:31:15.820804  429920 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1208 18:31:15.820814  429920 command_runner.go:130] > # 	"image_layer_reuse",
	I1208 18:31:15.820823  429920 command_runner.go:130] > # 	"containers_oom_total",
	I1208 18:31:15.820833  429920 command_runner.go:130] > # 	"containers_oom",
	I1208 18:31:15.820841  429920 command_runner.go:130] > # 	"processes_defunct",
	I1208 18:31:15.820851  429920 command_runner.go:130] > # 	"operations_total",
	I1208 18:31:15.820859  429920 command_runner.go:130] > # 	"operations_latency_seconds",
	I1208 18:31:15.820871  429920 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1208 18:31:15.820882  429920 command_runner.go:130] > # 	"operations_errors_total",
	I1208 18:31:15.820893  429920 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1208 18:31:15.820904  429920 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1208 18:31:15.820914  429920 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1208 18:31:15.820925  429920 command_runner.go:130] > # 	"image_pulls_success_total",
	I1208 18:31:15.820936  429920 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1208 18:31:15.820947  429920 command_runner.go:130] > # 	"containers_oom_count_total",
	I1208 18:31:15.820957  429920 command_runner.go:130] > # ]
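An illustrative metrics stanza built from the collector names listed above (the narrowed selection is an assumed example, not this run's configuration):

    [crio.metrics]
    enable_metrics = true
    metrics_collectors = [
        "operations",
        "image_pulls_failures",
        "containers_oom_total",
    ]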
	I1208 18:31:15.820970  429920 command_runner.go:130] > # The port on which the metrics server will listen.
	I1208 18:31:15.820980  429920 command_runner.go:130] > # metrics_port = 9090
	I1208 18:31:15.820993  429920 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1208 18:31:15.821001  429920 command_runner.go:130] > # metrics_socket = ""
	I1208 18:31:15.821013  429920 command_runner.go:130] > # The certificate for the secure metrics server.
	I1208 18:31:15.821027  429920 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1208 18:31:15.821041  429920 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1208 18:31:15.821054  429920 command_runner.go:130] > # certificate on any modification event.
	I1208 18:31:15.821065  429920 command_runner.go:130] > # metrics_cert = ""
	I1208 18:31:15.821075  429920 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1208 18:31:15.821087  429920 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1208 18:31:15.821098  429920 command_runner.go:130] > # metrics_key = ""
	I1208 18:31:15.821111  429920 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1208 18:31:15.821121  429920 command_runner.go:130] > [crio.tracing]
	I1208 18:31:15.821134  429920 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1208 18:31:15.821145  429920 command_runner.go:130] > # enable_tracing = false
	I1208 18:31:15.821155  429920 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1208 18:31:15.821167  429920 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1208 18:31:15.821177  429920 command_runner.go:130] > # Number of samples to collect per million spans.
	I1208 18:31:15.821189  429920 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
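A sketch of enabling OpenTelemetry export with the options above (the endpoint and sampling rate are illustrative values):

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "0.0.0.0:4317"            # gRPC trace collector address
    tracing_sampling_rate_per_million = 100000   # sample ~10% of spans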
	I1208 18:31:15.821202  429920 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1208 18:31:15.821212  429920 command_runner.go:130] > [crio.stats]
	I1208 18:31:15.821224  429920 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1208 18:31:15.821235  429920 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1208 18:31:15.821244  429920 command_runner.go:130] > # stats_collection_period = 0
	I1208 18:31:15.821326  429920 cni.go:84] Creating CNI manager for ""
	I1208 18:31:15.821336  429920 cni.go:136] 2 nodes found, recommending kindnet
	I1208 18:31:15.821355  429920 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1208 18:31:15.821389  429920 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985452 NodeName:multinode-985452-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 18:31:15.821534  429920 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-985452-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 18:31:15.821605  429920 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-985452-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1208 18:31:15.821687  429920 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1208 18:31:15.829321  429920 command_runner.go:130] > kubeadm
	I1208 18:31:15.829342  429920 command_runner.go:130] > kubectl
	I1208 18:31:15.829350  429920 command_runner.go:130] > kubelet
	I1208 18:31:15.830023  429920 binaries.go:44] Found k8s binaries, skipping transfer
	I1208 18:31:15.830091  429920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1208 18:31:15.837892  429920 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1208 18:31:15.853430  429920 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 18:31:15.870291  429920 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1208 18:31:15.873742  429920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 18:31:15.884742  429920 host.go:66] Checking if "multinode-985452" exists ...
	I1208 18:31:15.885007  429920 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:31:15.884999  429920 start.go:304] JoinCluster: &{Name:multinode-985452 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-985452 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:31:15.885086  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1208 18:31:15.885129  429920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:31:15.902296  429920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:31:16.042897  429920 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token srtspd.6xioy98l3wdas898 --discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 
	I1208 18:31:16.047226  429920 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1208 18:31:16.047265  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token srtspd.6xioy98l3wdas898 --discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-985452-m02"
	I1208 18:31:16.081279  429920 command_runner.go:130] ! W1208 18:31:16.080802    1111 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1208 18:31:16.108823  429920 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1208 18:31:16.175706  429920 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 18:31:18.305211  429920 command_runner.go:130] > [preflight] Running pre-flight checks
	I1208 18:31:18.305236  429920 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1208 18:31:18.305243  429920 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1208 18:31:18.305248  429920 command_runner.go:130] > OS: Linux
	I1208 18:31:18.305255  429920 command_runner.go:130] > CGROUPS_CPU: enabled
	I1208 18:31:18.305260  429920 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1208 18:31:18.305266  429920 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1208 18:31:18.305270  429920 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1208 18:31:18.305275  429920 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1208 18:31:18.305280  429920 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1208 18:31:18.305287  429920 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1208 18:31:18.305294  429920 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1208 18:31:18.305299  429920 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1208 18:31:18.305306  429920 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1208 18:31:18.305315  429920 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1208 18:31:18.305323  429920 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 18:31:18.305331  429920 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 18:31:18.305336  429920 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1208 18:31:18.305347  429920 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1208 18:31:18.305354  429920 command_runner.go:130] > This node has joined the cluster:
	I1208 18:31:18.305360  429920 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1208 18:31:18.305368  429920 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1208 18:31:18.305378  429920 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1208 18:31:18.305398  429920 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token srtspd.6xioy98l3wdas898 --discovery-token-ca-cert-hash sha256:1c9f3d84c6bfbc532e2c32f67f1098748d80bb69584571853fbf90a756bcc801 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-985452-m02": (2.258121046s)
	I1208 18:31:18.305427  429920 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1208 18:31:18.460035  429920 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1208 18:31:18.460133  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b minikube.k8s.io/name=multinode-985452 minikube.k8s.io/updated_at=2023_12_08T18_31_18_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 18:31:18.532368  429920 command_runner.go:130] > node/multinode-985452-m02 labeled
	I1208 18:31:18.534940  429920 start.go:306] JoinCluster complete in 2.649936035s
	I1208 18:31:18.534977  429920 cni.go:84] Creating CNI manager for ""
	I1208 18:31:18.534985  429920 cni.go:136] 2 nodes found, recommending kindnet
	I1208 18:31:18.535028  429920 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 18:31:18.538544  429920 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1208 18:31:18.538573  429920 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1208 18:31:18.538583  429920 command_runner.go:130] > Device: 34h/52d	Inode: 1303407     Links: 1
	I1208 18:31:18.538589  429920 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 18:31:18.538595  429920 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1208 18:31:18.538600  429920 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1208 18:31:18.538605  429920 command_runner.go:130] > Change: 2023-12-08 18:10:37.800699976 +0000
	I1208 18:31:18.538610  429920 command_runner.go:130] >  Birth: 2023-12-08 18:10:37.776697531 +0000
	I1208 18:31:18.538689  429920 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1208 18:31:18.538702  429920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1208 18:31:18.555476  429920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 18:31:18.763743  429920 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1208 18:31:18.767425  429920 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1208 18:31:18.769752  429920 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1208 18:31:18.784000  429920 command_runner.go:130] > daemonset.apps/kindnet configured
	I1208 18:31:18.788881  429920 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:31:18.789112  429920 kapi.go:59] client config for multinode-985452: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:31:18.789412  429920 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1208 18:31:18.789424  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:18.789432  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:18.789438  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:18.791283  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:18.791301  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:18.791312  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:18.791318  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:18.791323  429920 round_trippers.go:580]     Content-Length: 291
	I1208 18:31:18.791329  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:18 GMT
	I1208 18:31:18.791334  429920 round_trippers.go:580]     Audit-Id: 0a6e6878-5a20-488d-8c54-b338b65d65b0
	I1208 18:31:18.791339  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:18.791347  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:18.791375  429920 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9cad5cc5-6d14-4fb9-8d70-bbd3db2a56bf","resourceVersion":"445","creationTimestamp":"2023-12-08T18:30:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1208 18:31:18.791462  429920 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-985452" context rescaled to 1 replicas
	I1208 18:31:18.791488  429920 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1208 18:31:18.793271  429920 out.go:177] * Verifying Kubernetes components...
	I1208 18:31:18.794703  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:31:18.805430  429920 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:31:18.805644  429920 kapi.go:59] client config for multinode-985452: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.crt", KeyFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/profiles/multinode-985452/client.key", CAFile:"/home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 18:31:18.805882  429920 node_ready.go:35] waiting up to 6m0s for node "multinode-985452-m02" to be "Ready" ...
	I1208 18:31:18.805943  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:18.805951  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:18.805958  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:18.805964  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:18.807976  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:18.808001  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:18.808012  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:18.808018  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:18.808027  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:18 GMT
	I1208 18:31:18.808032  429920 round_trippers.go:580]     Audit-Id: 2a74b266-8ead-408d-9a7f-1fa2b97e03a6
	I1208 18:31:18.808040  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:18.808046  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:18.808206  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452-m02","uid":"5d7a1379-1e93-4b07-a04c-01e886ab58aa","resourceVersion":"486","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_08T18_31_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1208 18:31:18.808576  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:18.808590  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:18.808597  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:18.808603  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:18.810250  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:18.810264  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:18.810270  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:18.810276  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:18.810281  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:18 GMT
	I1208 18:31:18.810286  429920 round_trippers.go:580]     Audit-Id: 30b00a58-212e-44ec-ac48-11756d737e39
	I1208 18:31:18.810292  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:18.810297  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:18.810401  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452-m02","uid":"5d7a1379-1e93-4b07-a04c-01e886ab58aa","resourceVersion":"486","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_08T18_31_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1208 18:31:19.311500  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:19.311528  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:19.311539  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:19.311548  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:19.314096  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:19.314118  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:19.314125  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:19.314131  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:19.314137  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:19 GMT
	I1208 18:31:19.314142  429920 round_trippers.go:580]     Audit-Id: d4184803-11a7-4ef3-803a-415de0c51b64
	I1208 18:31:19.314147  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:19.314152  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:19.314291  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452-m02","uid":"5d7a1379-1e93-4b07-a04c-01e886ab58aa","resourceVersion":"486","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_08T18_31_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1208 18:31:19.811687  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:19.811710  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:19.811719  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:19.811727  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:19.813759  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:19.813781  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:19.813792  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:19.813800  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:19.813808  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:19 GMT
	I1208 18:31:19.813817  429920 round_trippers.go:580]     Audit-Id: 01ed71bd-d80e-4291-b34a-70c2a0ee617a
	I1208 18:31:19.813825  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:19.813841  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:19.813954  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452-m02","uid":"5d7a1379-1e93-4b07-a04c-01e886ab58aa","resourceVersion":"486","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_08T18_31_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1208 18:31:20.311652  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:20.311676  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.311684  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.311690  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.314168  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:20.314188  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.314195  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.314201  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.314207  429920 round_trippers.go:580]     Audit-Id: 3f566cb5-21e4-4671-9290-6e6fee6ee5e5
	I1208 18:31:20.314212  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.314217  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.314228  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.314427  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452-m02","uid":"5d7a1379-1e93-4b07-a04c-01e886ab58aa","resourceVersion":"502","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_08T18_31_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5728 chars]
	I1208 18:31:20.314767  429920 node_ready.go:49] node "multinode-985452-m02" has status "Ready":"True"
	I1208 18:31:20.314786  429920 node_ready.go:38] duration metric: took 1.508889274s waiting for node "multinode-985452-m02" to be "Ready" ...
	I1208 18:31:20.314798  429920 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1208 18:31:20.314864  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1208 18:31:20.314874  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.314884  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.314893  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.318203  429920 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1208 18:31:20.318234  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.318245  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.318254  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.318262  429920 round_trippers.go:580]     Audit-Id: 934d7627-39bf-44f1-b70e-95e006b3c8e4
	I1208 18:31:20.318270  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.318282  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.318290  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.318860  429920 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"441","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1208 18:31:20.321035  429920 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-q28mc" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.321120  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-q28mc
	I1208 18:31:20.321131  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.321138  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.321144  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.323095  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.323111  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.323120  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.323126  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.323131  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.323136  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.323141  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.323146  429920 round_trippers.go:580]     Audit-Id: 70707b15-a535-43f4-ae7b-26ef7330ad09
	I1208 18:31:20.323283  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-q28mc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"79df6371-4a56-4034-8e15-947b595ac5bb","resourceVersion":"441","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0908a053-82c0-4e53-9210-3828bdbe3681","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0908a053-82c0-4e53-9210-3828bdbe3681\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1208 18:31:20.323817  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:20.323833  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.323846  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.323857  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.325486  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.325504  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.325513  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.325521  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.325529  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.325541  429920 round_trippers.go:580]     Audit-Id: 3c11d4b3-7130-4447-be9b-257bbec08f6a
	I1208 18:31:20.325551  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.325563  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.325686  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1208 18:31:20.325989  429920 pod_ready.go:92] pod "coredns-5dd5756b68-q28mc" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:20.326005  429920 pod_ready.go:81] duration metric: took 4.948876ms waiting for pod "coredns-5dd5756b68-q28mc" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.326013  429920 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.326063  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985452
	I1208 18:31:20.326070  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.326077  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.326083  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.327614  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.327633  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.327643  429920 round_trippers.go:580]     Audit-Id: 4e7efef5-cad6-40ee-8be5-752f59d97761
	I1208 18:31:20.327650  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.327658  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.327672  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.327680  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.327690  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.327781  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985452","namespace":"kube-system","uid":"f7cc6b87-daec-4ba9-ad97-ae70c35b2022","resourceVersion":"311","creationTimestamp":"2023-12-08T18:30:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8903c2d6c0760f4e3d39229b9b6a1b8b","kubernetes.io/config.mirror":"8903c2d6c0760f4e3d39229b9b6a1b8b","kubernetes.io/config.seen":"2023-12-08T18:30:13.605165626Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1208 18:31:20.328162  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:20.328177  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.328186  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.328193  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.329676  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.329694  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.329709  429920 round_trippers.go:580]     Audit-Id: 014527df-d488-43d8-937a-c18c8f6b69f6
	I1208 18:31:20.329715  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.329724  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.329732  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.329746  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.329755  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.329886  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1208 18:31:20.330248  429920 pod_ready.go:92] pod "etcd-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:20.330267  429920 pod_ready.go:81] duration metric: took 4.247228ms waiting for pod "etcd-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.330286  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.330343  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985452
	I1208 18:31:20.330354  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.330365  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.330375  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.332063  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.332084  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.332093  429920 round_trippers.go:580]     Audit-Id: 392aef35-c29e-4117-a234-dec719c76110
	I1208 18:31:20.332100  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.332108  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.332117  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.332129  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.332140  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.332292  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985452","namespace":"kube-system","uid":"4453075e-130b-4948-ba80-8df11bbde032","resourceVersion":"314","creationTimestamp":"2023-12-08T18:30:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"1d9f2c65662e1a2b368034f468091c6f","kubernetes.io/config.mirror":"1d9f2c65662e1a2b368034f468091c6f","kubernetes.io/config.seen":"2023-12-08T18:30:19.727611796Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1208 18:31:20.332855  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:20.332873  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.332884  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.332893  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.334285  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.334299  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.334306  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.334315  429920 round_trippers.go:580]     Audit-Id: 6566c28e-4c90-4c0b-bc57-a051ba885738
	I1208 18:31:20.334322  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.334330  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.334345  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.334354  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.334475  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1208 18:31:20.334786  429920 pod_ready.go:92] pod "kube-apiserver-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:20.334800  429920 pod_ready.go:81] duration metric: took 4.504472ms waiting for pod "kube-apiserver-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.334808  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.334848  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985452
	I1208 18:31:20.334856  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.334863  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.334869  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.336338  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.336357  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.336366  429920 round_trippers.go:580]     Audit-Id: 8645c57b-60ce-44f8-bfc3-a0f4d5e32f1a
	I1208 18:31:20.336374  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.336381  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.336390  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.336400  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.336411  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.336515  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985452","namespace":"kube-system","uid":"4567aff3-4497-4c0b-a563-789999efb852","resourceVersion":"319","creationTimestamp":"2023-12-08T18:30:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"882fad5af594d8fc4750809e9dfef444","kubernetes.io/config.mirror":"882fad5af594d8fc4750809e9dfef444","kubernetes.io/config.seen":"2023-12-08T18:30:19.727613270Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1208 18:31:20.336891  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:20.336902  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.336912  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.336921  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.338326  429920 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1208 18:31:20.338340  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.338346  429920 round_trippers.go:580]     Audit-Id: 4ccc024f-a2ff-4159-8d0a-0ddabf77ea69
	I1208 18:31:20.338351  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.338357  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.338361  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.338367  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.338375  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.338503  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1208 18:31:20.338745  429920 pod_ready.go:92] pod "kube-controller-manager-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:20.338758  429920 pod_ready.go:81] duration metric: took 3.944624ms waiting for pod "kube-controller-manager-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.338766  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndp9r" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.512152  429920 request.go:629] Waited for 173.328583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndp9r
	I1208 18:31:20.512251  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndp9r
	I1208 18:31:20.512263  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.512275  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.512286  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.514781  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:20.514809  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.514819  429920 round_trippers.go:580]     Audit-Id: 2d228a05-120a-455a-a0eb-cf13d4eda102
	I1208 18:31:20.514827  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.514835  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.514842  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.514849  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.514857  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.515034  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ndp9r","generateName":"kube-proxy-","namespace":"kube-system","uid":"1deaf054-3d34-4150-a31d-c7c29577feab","resourceVersion":"496","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3d5b21b6-2ec5-4510-b9cc-91174bf753f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5b21b6-2ec5-4510-b9cc-91174bf753f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1208 18:31:20.711740  429920 request.go:629] Waited for 196.262044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:20.711800  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452-m02
	I1208 18:31:20.711805  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.711813  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.711819  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.714309  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:20.714355  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.714368  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.714381  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.714391  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.714404  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.714417  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.714429  429920 round_trippers.go:580]     Audit-Id: af7aad5a-7b7e-45e3-93ab-ee290fa3cca4
	I1208 18:31:20.714553  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452-m02","uid":"5d7a1379-1e93-4b07-a04c-01e886ab58aa","resourceVersion":"502","creationTimestamp":"2023-12-08T18:31:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_08T18_31_18_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:31:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5728 chars]
	I1208 18:31:20.714879  429920 pod_ready.go:92] pod "kube-proxy-ndp9r" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:20.714898  429920 pod_ready.go:81] duration metric: took 376.125311ms waiting for pod "kube-proxy-ndp9r" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.714912  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wf8gr" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:20.912353  429920 request.go:629] Waited for 197.35031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf8gr
	I1208 18:31:20.912419  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wf8gr
	I1208 18:31:20.912424  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:20.912433  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:20.912440  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:20.914731  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:20.914757  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:20.914766  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:20 GMT
	I1208 18:31:20.914774  429920 round_trippers.go:580]     Audit-Id: 2af40bfb-169a-4dce-91e0-eb441e85f7e8
	I1208 18:31:20.914781  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:20.914789  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:20.914796  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:20.914805  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:20.914937  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wf8gr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f5b56b5d-7c2d-4dd2-8152-59d68bf94428","resourceVersion":"410","creationTimestamp":"2023-12-08T18:30:32Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"3d5b21b6-2ec5-4510-b9cc-91174bf753f2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3d5b21b6-2ec5-4510-b9cc-91174bf753f2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1208 18:31:21.111681  429920 request.go:629] Waited for 196.282219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:21.111754  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:21.111765  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:21.111773  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:21.111779  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:21.114052  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:21.114075  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:21.114086  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:21.114095  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:21.114104  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:21.114113  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:21.114125  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:21 GMT
	I1208 18:31:21.114136  429920 round_trippers.go:580]     Audit-Id: 42b0d547-311b-44ce-8b9d-13bc5940e95a
	I1208 18:31:21.114248  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1208 18:31:21.114643  429920 pod_ready.go:92] pod "kube-proxy-wf8gr" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:21.114662  429920 pod_ready.go:81] duration metric: took 399.741808ms waiting for pod "kube-proxy-wf8gr" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:21.114675  429920 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:21.312093  429920 request.go:629] Waited for 197.336742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985452
	I1208 18:31:21.312219  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985452
	I1208 18:31:21.312229  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:21.312237  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:21.312245  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:21.314528  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:21.314550  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:21.314560  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:21.314568  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:21.314575  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:21.314584  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:21.314593  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:21 GMT
	I1208 18:31:21.314606  429920 round_trippers.go:580]     Audit-Id: 475d5be1-cd6c-427a-b609-401e6805ae51
	I1208 18:31:21.314738  429920 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985452","namespace":"kube-system","uid":"0e7e0dab-442a-4004-94ce-17e535110819","resourceVersion":"339","creationTimestamp":"2023-12-08T18:30:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"600c961249358e713a27cc452ad5a264","kubernetes.io/config.mirror":"600c961249358e713a27cc452ad5a264","kubernetes.io/config.seen":"2023-12-08T18:30:19.727614659Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1208 18:31:21.512486  429920 request.go:629] Waited for 197.344058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:21.512555  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-985452
	I1208 18:31:21.512560  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:21.512571  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:21.512577  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:21.514893  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:21.514918  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:21.514929  429920 round_trippers.go:580]     Audit-Id: c3cd716a-d8f0-4379-a2ef-e48c0fadb9d1
	I1208 18:31:21.514938  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:21.514945  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:21.514951  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:21.514959  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:21.514966  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:21 GMT
	I1208 18:31:21.515099  429920 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-08T18:30:16Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1208 18:31:21.515465  429920 pod_ready.go:92] pod "kube-scheduler-multinode-985452" in "kube-system" namespace has status "Ready":"True"
	I1208 18:31:21.515485  429920 pod_ready.go:81] duration metric: took 400.800927ms waiting for pod "kube-scheduler-multinode-985452" in "kube-system" namespace to be "Ready" ...
	I1208 18:31:21.515496  429920 pod_ready.go:38] duration metric: took 1.200684416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
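
The pod_ready loop in the log above simply re-GETs each system-critical pod and checks its Ready condition, backing off about half a second between polls (compare the 18:31:18.8 / 19.3 / 19.8 / 20.3 request timestamps). A minimal client-go sketch of the same check, for readers reproducing it outside the test harness; the function name waitPodReady and the 500ms interval are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady re-fetches the pod until its Ready condition is True,
// mirroring the pod_ready.go checks in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval visible in the timestamps
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-q28mc", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
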
	I1208 18:31:21.515522  429920 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 18:31:21.515570  429920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:31:21.526414  429920 system_svc.go:56] duration metric: took 10.88551ms WaitForService to wait for kubelet.
	I1208 18:31:21.526440  429920 kubeadm.go:581] duration metric: took 2.73492979s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1208 18:31:21.526481  429920 node_conditions.go:102] verifying NodePressure condition ...
	I1208 18:31:21.711832  429920 request.go:629] Waited for 185.272771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1208 18:31:21.711890  429920 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1208 18:31:21.711895  429920 round_trippers.go:469] Request Headers:
	I1208 18:31:21.711903  429920 round_trippers.go:473]     Accept: application/json, */*
	I1208 18:31:21.711910  429920 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1208 18:31:21.714803  429920 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1208 18:31:21.714824  429920 round_trippers.go:577] Response Headers:
	I1208 18:31:21.714831  429920 round_trippers.go:580]     Cache-Control: no-cache, private
	I1208 18:31:21.714837  429920 round_trippers.go:580]     Content-Type: application/json
	I1208 18:31:21.714843  429920 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 78eb7dd8-c533-4e1f-9ab2-705e270d8892
	I1208 18:31:21.714848  429920 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 00590696-d90e-49ec-9b8f-56de9fd4e566
	I1208 18:31:21.714854  429920 round_trippers.go:580]     Date: Fri, 08 Dec 2023 18:31:21 GMT
	I1208 18:31:21.714859  429920 round_trippers.go:580]     Audit-Id: 5bae65a5-9111-4448-aacb-f743ccd1d7a1
	I1208 18:31:21.715051  429920 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"multinode-985452","uid":"2f6ca656-bf86-49cc-a047-6bca09cea1e5","resourceVersion":"447","creationTimestamp":"2023-12-08T18:30:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985452","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4117b3e3d296a64e59281c5525848e6479e0626b","minikube.k8s.io/name":"multinode-985452","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_08T18_30_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12720 chars]
	I1208 18:31:21.715613  429920 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1208 18:31:21.715631  429920 node_conditions.go:123] node cpu capacity is 8
	I1208 18:31:21.715643  429920 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1208 18:31:21.715647  429920 node_conditions.go:123] node cpu capacity is 8
	I1208 18:31:21.715653  429920 node_conditions.go:105] duration metric: took 189.167887ms to run NodePressure ...
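
The recurring "request.go:629] Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from any server-side limit. The knobs live on rest.Config; a sketch of where they would be raised, with illustrative values (50/100 are not what minikube uses):

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (5 QPS, burst 10); bursts of GETs beyond that are delayed on the
	// client, which is exactly what request.go:629 reports in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = cs // subsequent requests through cs are throttled at the new limits
}
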
	I1208 18:31:21.715664  429920 start.go:228] waiting for startup goroutines ...
	I1208 18:31:21.715690  429920 start.go:242] writing updated cluster config ...
	I1208 18:31:21.715961  429920 ssh_runner.go:195] Run: rm -f paused
	I1208 18:31:21.762678  429920 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1208 18:31:21.765534  429920 out.go:177] * Done! kubectl is now configured to use "multinode-985452" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 08 18:31:03 multinode-985452 crio[961]: time="2023-12-08 18:31:03.925231683Z" level=info msg="Starting container: b94238b534f74250b77755e6830c5d94e96869cf882c07f3f1f87e008fb68586" id=a237ec1e-f6af-40d5-be57-71f2347f57d9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 18:31:03 multinode-985452 crio[961]: time="2023-12-08 18:31:03.928704988Z" level=info msg="Created container 2e29907e0cf77649d10f21417bce8ac576c0710d075becf7d6521b6c243d9e15: kube-system/coredns-5dd5756b68-q28mc/coredns" id=97a1af40-ab4b-41be-b9b0-d4f8038e87c5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 18:31:03 multinode-985452 crio[961]: time="2023-12-08 18:31:03.929199891Z" level=info msg="Starting container: 2e29907e0cf77649d10f21417bce8ac576c0710d075becf7d6521b6c243d9e15" id=ff4a86ac-3036-4307-a611-b6d2a3d4e71e name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 18:31:03 multinode-985452 crio[961]: time="2023-12-08 18:31:03.934707788Z" level=info msg="Started container" PID=2336 containerID=b94238b534f74250b77755e6830c5d94e96869cf882c07f3f1f87e008fb68586 description=kube-system/storage-provisioner/storage-provisioner id=a237ec1e-f6af-40d5-be57-71f2347f57d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5ce8411d86bd0d8a879e114d2d619fe86840b30e2c0b63f0287f6e5162e0b62
	Dec 08 18:31:03 multinode-985452 crio[961]: time="2023-12-08 18:31:03.937807621Z" level=info msg="Started container" PID=2346 containerID=2e29907e0cf77649d10f21417bce8ac576c0710d075becf7d6521b6c243d9e15 description=kube-system/coredns-5dd5756b68-q28mc/coredns id=ff4a86ac-3036-4307-a611-b6d2a3d4e71e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a71d87a20b1583fd53bd0cb961f04dda0eacfa4f6ec6b4c17103b02079fd6ef6
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.741887760Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-mb9gz/POD" id=4f553204-f3ce-41ba-aae6-5f44b05eccad name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.741960346Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.756156481Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-mb9gz Namespace:default ID:83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858 UID:24831e1c-e6cd-47db-afdf-87299e26dfb1 NetNS:/var/run/netns/a5166198-c4c4-4de5-9f03-782561813226 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.756192246Z" level=info msg="Adding pod default_busybox-5bc68d56bd-mb9gz to CNI network \"kindnet\" (type=ptp)"
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.764816382Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-mb9gz Namespace:default ID:83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858 UID:24831e1c-e6cd-47db-afdf-87299e26dfb1 NetNS:/var/run/netns/a5166198-c4c4-4de5-9f03-782561813226 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.764938156Z" level=info msg="Checking pod default_busybox-5bc68d56bd-mb9gz for CNI network kindnet (type=ptp)"
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.781830465Z" level=info msg="Ran pod sandbox 83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858 with infra container: default/busybox-5bc68d56bd-mb9gz/POD" id=4f553204-f3ce-41ba-aae6-5f44b05eccad name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.782857522Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a96c2287-379c-4ea0-9829-0cb906aab4ad name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.783080602Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=a96c2287-379c-4ea0-9829-0cb906aab4ad name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.783895980Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=bd7d5490-415a-462f-862b-b65fd041f034 name=/runtime.v1.ImageService/PullImage
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.788207033Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 08 18:31:22 multinode-985452 crio[961]: time="2023-12-08 18:31:22.949520876Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.357224047Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=bd7d5490-415a-462f-862b-b65fd041f034 name=/runtime.v1.ImageService/PullImage
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.358213088Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=4b23a79f-4d23-4696-a033-7b57700887b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.358971127Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4b23a79f-4d23-4696-a033-7b57700887b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.359830215Z" level=info msg="Creating container: default/busybox-5bc68d56bd-mb9gz/busybox" id=2d8e97af-3000-49c9-a2d6-85c202ed8aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.359934350Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.435937677Z" level=info msg="Created container f810649f0953191c086bc86fa3469c9c769f76aa6214c4a8ae9af7d1838a5817: default/busybox-5bc68d56bd-mb9gz/busybox" id=2d8e97af-3000-49c9-a2d6-85c202ed8aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.436754104Z" level=info msg="Starting container: f810649f0953191c086bc86fa3469c9c769f76aa6214c4a8ae9af7d1838a5817" id=915d9767-900e-40d2-9f88-2330e1c854ad name=/runtime.v1.RuntimeService/StartContainer
	Dec 08 18:31:23 multinode-985452 crio[961]: time="2023-12-08 18:31:23.446558594Z" level=info msg="Started container" PID=2520 containerID=f810649f0953191c086bc86fa3469c9c769f76aa6214c4a8ae9af7d1838a5817 description=default/busybox-5bc68d56bd-mb9gz/busybox id=915d9767-900e-40d2-9f88-2330e1c854ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858
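
The CRI-O entries above trace a complete pod start for the busybox test pod: RunPodSandbox, CNI attach to the kindnet network, an ImageStatus miss, PullImage (requested by tag, resolved to a digest), CreateContainer, and StartContainer. The same state can be inspected from inside the node (e.g. via out/minikube-linux-amd64 -p multinode-985452 ssh); these crictl invocations are illustrative, assuming crictl is present in the node image, with the container ID taken from this log:

	$ sudo crictl ps --name busybox
	$ sudo crictl inspect f810649f0953191c086bc86fa3469c9c769f76aa6214c4a8ae9af7d1838a5817
	$ sudo crictl logs f810649f0953191c086bc86fa3469c9c769f76aa6214c4a8ae9af7d1838a5817
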
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f810649f09531       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   83310835ec355       busybox-5bc68d56bd-mb9gz
	2e29907e0cf77       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      23 seconds ago       Running             coredns                   0                   a71d87a20b158       coredns-5dd5756b68-q28mc
	b94238b534f74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      23 seconds ago       Running             storage-provisioner       0                   d5ce8411d86bd       storage-provisioner
	0a8f773c433ba       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      54 seconds ago       Running             kube-proxy                0                   e329fa66f6c49       kube-proxy-wf8gr
	de6d0ad655dc5       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      54 seconds ago       Running             kindnet-cni               0                   54ee1a24d166a       kindnet-nfbjn
	832d650180c47       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   09f818ce5251f       kube-apiserver-multinode-985452
	887f601860b30       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   64eff9d9279ea       kube-controller-manager-multinode-985452
	8292d3854f354       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   ac47d05688de3       etcd-multinode-985452
	f5eecf8bccaf0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   5ea6666e4fb94       kube-scheduler-multinode-985452
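
The table above is the CRI view of the node at snapshot time: the busybox test pod (4 seconds old) plus the control-plane and networking pods, all Running with zero restart attempts. As a minimal sketch (not part of the test itself, and assuming the profile is still up), the same view can be pulled straight from CRI-O on the node:

	# list all containers known to CRI-O, matching the table above
	$ out/minikube-linux-amd64 -p multinode-985452 ssh "sudo crictl ps -a"
	# show the RepoDigests reported by the ImageStatus log lines above
	$ out/minikube-linux-amd64 -p multinode-985452 ssh "sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28"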
	
	* 
	* ==> coredns [2e29907e0cf77649d10f21417bce8ac576c0710d075becf7d6521b6c243d9e15] <==
	* [INFO] 10.244.0.3:54035 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092059s
	[INFO] 10.244.1.2:44192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139354s
	[INFO] 10.244.1.2:45969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00163514s
	[INFO] 10.244.1.2:34977 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088567s
	[INFO] 10.244.1.2:48863 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067875s
	[INFO] 10.244.1.2:51524 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001177684s
	[INFO] 10.244.1.2:33933 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079291s
	[INFO] 10.244.1.2:42803 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065265s
	[INFO] 10.244.1.2:58789 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130817s
	[INFO] 10.244.0.3:57157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113282s
	[INFO] 10.244.0.3:53689 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077083s
	[INFO] 10.244.0.3:51802 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000040522s
	[INFO] 10.244.0.3:36560 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162673s
	[INFO] 10.244.1.2:48231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117351s
	[INFO] 10.244.1.2:54861 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008245s
	[INFO] 10.244.1.2:53434 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005173s
	[INFO] 10.244.1.2:57492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061578s
	[INFO] 10.244.0.3:49393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106559s
	[INFO] 10.244.0.3:41588 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094207s
	[INFO] 10.244.0.3:34734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095529s
	[INFO] 10.244.0.3:35072 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005919s
	[INFO] 10.244.1.2:55347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122466s
	[INFO] 10.244.1.2:41996 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104539s
	[INFO] 10.244.1.2:38111 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081724s
	[INFO] 10.244.1.2:38405 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056908s
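
These queries are the DNS half of the failing PingHostFrom2Pods test: each busybox pod (10.244.0.3 on the control plane, 10.244.1.2 on m02) looks up kubernetes.default and host.minikube.internal. The host.minikube.internal lookups all return NOERROR (the NXDOMAIN answers are just the search-path expansions of the bare kubernetes.default name), so name resolution itself worked before the ping failed. A minimal repro sketch, assuming the pods are still running:

	# resolve the host from each test pod, as the test does
	$ kubectl --context multinode-985452 exec busybox-5bc68d56bd-mb9gz -- nslookup host.minikube.internal
	$ kubectl --context multinode-985452 exec busybox-5bc68d56bd-wwj6s -- nslookup host.minikube.internal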
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-985452
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985452
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b
	                    minikube.k8s.io/name=multinode-985452
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_08T18_30_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Dec 2023 18:30:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985452
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Dec 2023 18:31:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Dec 2023 18:31:03 +0000   Fri, 08 Dec 2023 18:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Dec 2023 18:31:03 +0000   Fri, 08 Dec 2023 18:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Dec 2023 18:31:03 +0000   Fri, 08 Dec 2023 18:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Dec 2023 18:31:03 +0000   Fri, 08 Dec 2023 18:31:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-985452
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 b470cc2872ea42919628bb1375b0eadb
	  System UUID:                f97eefa1-ea5f-4aa1-b349-8205d38c79fb
	  Boot ID:                    fbb3830a-6e88-496f-844f-172e564c45c3
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-mb9gz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-q28mc                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-multinode-985452                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         68s
	  kube-system                 kindnet-nfbjn                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-multinode-985452             250m (3%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-multinode-985452    200m (2%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-wf8gr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-multinode-985452             100m (1%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-985452 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-985452 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x8 over 74s)  kubelet          Node multinode-985452 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node multinode-985452 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node multinode-985452 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s                kubelet          Node multinode-985452 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node multinode-985452 event: Registered Node multinode-985452 in Controller
	  Normal  NodeReady                24s                kubelet          Node multinode-985452 status is now: NodeReady
	
	
	Name:               multinode-985452-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985452-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4117b3e3d296a64e59281c5525848e6479e0626b
	                    minikube.k8s.io/name=multinode-985452
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_08T18_31_18_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Dec 2023 18:31:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-985452-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Dec 2023 18:31:19 +0000   Fri, 08 Dec 2023 18:31:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Dec 2023 18:31:19 +0000   Fri, 08 Dec 2023 18:31:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Dec 2023 18:31:19 +0000   Fri, 08 Dec 2023 18:31:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Dec 2023 18:31:19 +0000   Fri, 08 Dec 2023 18:31:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-985452-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 95ca9550fd8045f8b3b5d7e3354301bf
	  System UUID:                7da2d7de-647f-4076-8c5b-b1b05329bc9a
	  Boot ID:                    fbb3830a-6e88-496f-844f-172e564c45c3
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wwj6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-bxvqd               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-ndp9r            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 8s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 11s)  kubelet          Node multinode-985452-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 11s)  kubelet          Node multinode-985452-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 11s)  kubelet          Node multinode-985452-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8s                kubelet          Node multinode-985452-m02 status is now: NodeReady
	  Normal  RegisteredNode           6s                node-controller  Node multinode-985452-m02 event: Registered Node multinode-985452-m02 in Controller
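
One line worth flagging in the m02 description: `Lease: Failed to get lease ... not found`. The node had registered only seconds before this snapshot, so its kube-node-lease object had not been created yet; the Ready condition above shows the kubelet was otherwise healthy. A minimal check, assuming the cluster is still up (the lease should appear within a few seconds of registration):

	$ kubectl --context multinode-985452 -n kube-node-lease get lease multinode-985452-m02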
	
	* 
	* ==> dmesg <==
	* [  +0.007355] FS-Cache: O-key=[8] 'b9a20f0200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.006690] FS-Cache: N-cookie d=0000000059c528da{9p.inode} n=0000000061bf7b75
	[  +0.008747] FS-Cache: N-key=[8] 'b9a20f0200000000'
	[  +4.078898] FS-Cache: Duplicate cookie detected
	[  +0.004678] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006765] FS-Cache: O-cookie d=00000000a0c9b1c7{9P.session} n=00000000c84b6137
	[  +0.007522] FS-Cache: O-key=[10] '34323936373230333034'
	[  +0.005375] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006569] FS-Cache: N-cookie d=00000000a0c9b1c7{9P.session} n=00000000bc9fd172
	[  +0.008904] FS-Cache: N-key=[10] '34323936373230333034'
	[Dec 8 18:22] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +1.023718] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +2.015782] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +4.127562] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[ +16.126373] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
	[Dec 8 18:23] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 32 e8 70 0b 48 24 fe 14 5e e7 07 23 08 00
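
The repeated `martian source 10.244.0.5 from 127.0.0.1` entries are the kernel logging packets that arrived on eth0 with a loopback source address; their timestamps (18:22-18:23) predate this cluster's creation at 18:30, so they are likely leftovers from an earlier profile on this shared CI host. As a hedged sketch, the sysctls governing this behavior can be read with:

	# rp_filter decides whether such packets are dropped; log_martians controls the dmesg noise
	$ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians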
	
	* 
	* ==> etcd [8292d3854f35429df84583f453222c123a9b7395ac0a2cc6687e112f74f5e94b] <==
	* {"level":"info","ts":"2023-12-08T18:30:14.420303Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-08T18:30:14.421746Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-08T18:30:14.421798Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-08T18:30:14.421879Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-08T18:30:14.422048Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-08T18:30:14.42212Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-08T18:30:14.450444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-08T18:30:14.450583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-08T18:30:14.450641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-08T18:30:14.450691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-08T18:30:14.450722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-08T18:30:14.450757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-08T18:30:14.450789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-08T18:30:14.451599Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:30:14.452241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-08T18:30:14.452449Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-08T18:30:14.452527Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-08T18:30:14.452577Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:30:14.452655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:30:14.452684Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-08T18:30:14.452239Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-985452 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-08T18:30:14.452265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-08T18:30:14.453197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-08T18:30:14.453862Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-08T18:31:10.206639Z","caller":"traceutil/trace.go:171","msg":"trace[207490837] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"218.304843ms","start":"2023-12-08T18:31:09.988317Z","end":"2023-12-08T18:31:10.206622Z","steps":["trace[207490837] 'process raft request'  (duration: 218.203239ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:31:27 up  2:13,  0 users,  load average: 0.96, 1.04, 0.81
	Linux multinode-985452 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [de6d0ad655dc5022be68f84685358c600dff5af0308cabd33fab14f4f3ad8908] <==
	* I1208 18:30:33.126286       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1208 18:30:33.126346       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1208 18:30:33.218676       1 main.go:116] setting mtu 1500 for CNI 
	I1208 18:30:33.218706       1 main.go:146] kindnetd IP family: "ipv4"
	I1208 18:30:33.218728       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1208 18:31:03.355735       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1208 18:31:03.364432       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1208 18:31:03.364458       1 main.go:227] handling current node
	I1208 18:31:13.370878       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1208 18:31:13.370903       1 main.go:227] handling current node
	I1208 18:31:23.382596       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1208 18:31:23.382623       1 main.go:227] handling current node
	I1208 18:31:23.382632       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1208 18:31:23.382637       1 main.go:250] Node multinode-985452-m02 has CIDR [10.244.1.0/24] 
	I1208 18:31:23.382789       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
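
kindnet's closing lines are the multi-node wiring: once m02 registers, the control-plane node learns its pod CIDR and installs a route for 10.244.1.0/24 via 192.168.58.3 (the single `Failed to get nodes ... i/o timeout` at 18:31:03 is a one-off retry that succeeds on the next line). To confirm the route landed, assuming the node is still up:

	$ out/minikube-linux-amd64 -p multinode-985452 ssh "ip route show 10.244.1.0/24"
	# expected output, roughly: 10.244.1.0/24 via 192.168.58.3 dev eth0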
	
	* 
	* ==> kube-apiserver [832d650180c47868224a6a49d711d5e35f85f3ef6632743de4b9a3d1759804a6] <==
	* I1208 18:30:16.719483       1 aggregator.go:166] initial CRD sync complete...
	I1208 18:30:16.719503       1 autoregister_controller.go:141] Starting autoregister controller
	I1208 18:30:16.719512       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 18:30:16.719521       1 cache.go:39] Caches are synced for autoregister controller
	I1208 18:30:16.720594       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1208 18:30:16.720667       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1208 18:30:16.722077       1 controller.go:624] quota admission added evaluator for: namespaces
	I1208 18:30:16.723447       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1208 18:30:16.733167       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 18:30:17.489726       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1208 18:30:17.493232       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1208 18:30:17.493247       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 18:30:17.862121       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 18:30:17.892600       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 18:30:17.940019       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1208 18:30:17.945375       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1208 18:30:17.946425       1 controller.go:624] quota admission added evaluator for: endpoints
	I1208 18:30:17.951738       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 18:30:18.855392       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1208 18:30:19.673670       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1208 18:30:19.683422       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1208 18:30:19.694207       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1208 18:30:32.626277       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1208 18:30:32.626278       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1208 18:30:32.630086       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [887f601860b3030faf6f77a0ea607f1758d193aefa19c225a00f021527f5d961] <==
	* I1208 18:31:03.516666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.55µs"
	I1208 18:31:03.534301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.179µs"
	I1208 18:31:04.926847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.383661ms"
	I1208 18:31:04.926961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.433µs"
	I1208 18:31:06.827281       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1208 18:31:06.827613       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-q28mc" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-q28mc"
	I1208 18:31:06.827639       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1208 18:31:18.252172       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985452-m02\" does not exist"
	I1208 18:31:18.257913       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-985452-m02" podCIDRs=["10.244.1.0/24"]
	I1208 18:31:18.265167       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ndp9r"
	I1208 18:31:18.265190       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bxvqd"
	I1208 18:31:19.897267       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-985452-m02"
	I1208 18:31:21.829622       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-985452-m02"
	I1208 18:31:21.829617       1 event.go:307] "Event occurred" object="multinode-985452-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-985452-m02 event: Registered Node multinode-985452-m02 in Controller"
	I1208 18:31:22.419406       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1208 18:31:22.426917       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wwj6s"
	I1208 18:31:22.432965       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-mb9gz"
	I1208 18:31:22.438706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.319673ms"
	I1208 18:31:22.449678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.914918ms"
	I1208 18:31:22.449786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.929µs"
	I1208 18:31:22.449842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.942µs"
	I1208 18:31:23.796958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.977473ms"
	I1208 18:31:23.797043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.906µs"
	I1208 18:31:23.953242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.139709ms"
	I1208 18:31:23.953350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.366µs"
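
This log captures the m02 join sequence end to end: the node is assigned PodCIDR 10.244.1.0/24, the kube-proxy and kindnet DaemonSets create kube-proxy-ndp9r and kindnet-bxvqd for it, and the busybox Deployment is scaled to two replicas so one test pod lands on each node. The CIDR assignment can be read back directly, as a minimal sketch:

	$ kubectl --context multinode-985452 get node multinode-985452-m02 -o jsonpath='{.spec.podCIDR}'
	# expected: 10.244.1.0/24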
	
	* 
	* ==> kube-proxy [0a8f773c433ba7943f367ace7b0c2fdf884b69ea93ca59008cbb30552fb36b5a] <==
	* I1208 18:30:33.141907       1 server_others.go:69] "Using iptables proxy"
	I1208 18:30:33.150059       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1208 18:30:33.168420       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1208 18:30:33.170067       1 server_others.go:152] "Using iptables Proxier"
	I1208 18:30:33.170102       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1208 18:30:33.170113       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1208 18:30:33.170157       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1208 18:30:33.170798       1 server.go:846] "Version info" version="v1.28.4"
	I1208 18:30:33.170948       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 18:30:33.172154       1 config.go:315] "Starting node config controller"
	I1208 18:30:33.172232       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1208 18:30:33.172185       1 config.go:97] "Starting endpoint slice config controller"
	I1208 18:30:33.172307       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1208 18:30:33.172203       1 config.go:188] "Starting service config controller"
	I1208 18:30:33.172387       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1208 18:30:33.272677       1 shared_informer.go:318] Caches are synced for service config
	I1208 18:30:33.272696       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1208 18:30:33.272781       1 shared_informer.go:318] Caches are synced for node config
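
The `route_localnet=1` line matters for tests that curl NodePorts on 127.0.0.1: without it the kernel treats locally-addressed traffic on a non-loopback interface as martian and drops it. A minimal way to inspect the resulting setting, assuming the profile is still running:

	$ out/minikube-linux-amd64 -p multinode-985452 ssh "sysctl net.ipv4.conf.all.route_localnet"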
	
	* 
	* ==> kube-scheduler [f5eecf8bccaf03493323b5cafc895f56ef73f738a098f8171fc031980d8d220d] <==
	* W1208 18:30:16.726068       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1208 18:30:16.726089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1208 18:30:16.726086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1208 18:30:16.726095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1208 18:30:16.726104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1208 18:30:16.726153       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1208 18:30:16.726256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1208 18:30:16.726273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1208 18:30:16.726271       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1208 18:30:16.726289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1208 18:30:16.726289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1208 18:30:16.726296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1208 18:30:16.726307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1208 18:30:16.726317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1208 18:30:17.565943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1208 18:30:17.566074       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1208 18:30:17.568357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1208 18:30:17.568387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1208 18:30:17.585680       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1208 18:30:17.585709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1208 18:30:17.703562       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1208 18:30:17.703591       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1208 18:30:17.727219       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1208 18:30:17.727254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1208 18:30:17.946798       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
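
The `forbidden` reflector warnings are a normal startup race: the scheduler begins listing resources before its RBAC grants are visible, retries, and the closing `Caches are synced` line shows it recovered. Had the errors persisted, one hedged way to verify the grant would be:

	$ kubectl --context multinode-985452 auth can-i list nodes --as=system:kube-scheduler
	# expected: yes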
	
	* 
	* ==> kubelet <==
	* Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723491    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1def7bb5-ed1e-47af-b6ba-4f4df25b5988-xtables-lock\") pod \"kindnet-nfbjn\" (UID: \"1def7bb5-ed1e-47af-b6ba-4f4df25b5988\") " pod="kube-system/kindnet-nfbjn"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723522    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcwqw\" (UniqueName: \"kubernetes.io/projected/1def7bb5-ed1e-47af-b6ba-4f4df25b5988-kube-api-access-hcwqw\") pod \"kindnet-nfbjn\" (UID: \"1def7bb5-ed1e-47af-b6ba-4f4df25b5988\") " pod="kube-system/kindnet-nfbjn"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723580    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5b56b5d-7c2d-4dd2-8152-59d68bf94428-lib-modules\") pod \"kube-proxy-wf8gr\" (UID: \"f5b56b5d-7c2d-4dd2-8152-59d68bf94428\") " pod="kube-system/kube-proxy-wf8gr"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723617    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxspn\" (UniqueName: \"kubernetes.io/projected/f5b56b5d-7c2d-4dd2-8152-59d68bf94428-kube-api-access-gxspn\") pod \"kube-proxy-wf8gr\" (UID: \"f5b56b5d-7c2d-4dd2-8152-59d68bf94428\") " pod="kube-system/kube-proxy-wf8gr"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723645    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1def7bb5-ed1e-47af-b6ba-4f4df25b5988-cni-cfg\") pod \"kindnet-nfbjn\" (UID: \"1def7bb5-ed1e-47af-b6ba-4f4df25b5988\") " pod="kube-system/kindnet-nfbjn"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723690    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5b56b5d-7c2d-4dd2-8152-59d68bf94428-xtables-lock\") pod \"kube-proxy-wf8gr\" (UID: \"f5b56b5d-7c2d-4dd2-8152-59d68bf94428\") " pod="kube-system/kube-proxy-wf8gr"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: I1208 18:30:32.723749    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1def7bb5-ed1e-47af-b6ba-4f4df25b5988-lib-modules\") pod \"kindnet-nfbjn\" (UID: \"1def7bb5-ed1e-47af-b6ba-4f4df25b5988\") " pod="kube-system/kindnet-nfbjn"
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: W1208 18:30:32.995498    1599 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio-54ee1a24d166a6fc3c66e80445c0bb6f76cbd5bd05f60b010c7117e0a1000ac5 WatchSource:0}: Error finding container 54ee1a24d166a6fc3c66e80445c0bb6f76cbd5bd05f60b010c7117e0a1000ac5: Status 404 returned error can't find the container with id 54ee1a24d166a6fc3c66e80445c0bb6f76cbd5bd05f60b010c7117e0a1000ac5
	Dec 08 18:30:32 multinode-985452 kubelet[1599]: W1208 18:30:32.995812    1599 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio-e329fa66f6c490cd808a351b19e08d31ac5ba7bd857b6cc36a176e8e690f4db7 WatchSource:0}: Error finding container e329fa66f6c490cd808a351b19e08d31ac5ba7bd857b6cc36a176e8e690f4db7: Status 404 returned error can't find the container with id e329fa66f6c490cd808a351b19e08d31ac5ba7bd857b6cc36a176e8e690f4db7
	Dec 08 18:30:33 multinode-985452 kubelet[1599]: I1208 18:30:33.861799    1599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wf8gr" podStartSLOduration=1.861750002 podCreationTimestamp="2023-12-08 18:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:30:33.852967421 +0000 UTC m=+14.202764307" watchObservedRunningTime="2023-12-08 18:30:33.861750002 +0000 UTC m=+14.211546884"
	Dec 08 18:30:33 multinode-985452 kubelet[1599]: I1208 18:30:33.861907    1599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-nfbjn" podStartSLOduration=1.8618824680000001 podCreationTimestamp="2023-12-08 18:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:30:33.861581285 +0000 UTC m=+14.211378171" watchObservedRunningTime="2023-12-08 18:30:33.861882468 +0000 UTC m=+14.211679353"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.493687    1599 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.515130    1599 topology_manager.go:215] "Topology Admit Handler" podUID="1eedf4a2-904b-41c1-997e-28f766fcddf3" podNamespace="kube-system" podName="storage-provisioner"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.516728    1599 topology_manager.go:215] "Topology Admit Handler" podUID="79df6371-4a56-4034-8e15-947b595ac5bb" podNamespace="kube-system" podName="coredns-5dd5756b68-q28mc"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.641636    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8br\" (UniqueName: \"kubernetes.io/projected/1eedf4a2-904b-41c1-997e-28f766fcddf3-kube-api-access-2r8br\") pod \"storage-provisioner\" (UID: \"1eedf4a2-904b-41c1-997e-28f766fcddf3\") " pod="kube-system/storage-provisioner"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.641683    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rjq\" (UniqueName: \"kubernetes.io/projected/79df6371-4a56-4034-8e15-947b595ac5bb-kube-api-access-s8rjq\") pod \"coredns-5dd5756b68-q28mc\" (UID: \"79df6371-4a56-4034-8e15-947b595ac5bb\") " pod="kube-system/coredns-5dd5756b68-q28mc"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.641706    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1eedf4a2-904b-41c1-997e-28f766fcddf3-tmp\") pod \"storage-provisioner\" (UID: \"1eedf4a2-904b-41c1-997e-28f766fcddf3\") " pod="kube-system/storage-provisioner"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: I1208 18:31:03.641733    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79df6371-4a56-4034-8e15-947b595ac5bb-config-volume\") pod \"coredns-5dd5756b68-q28mc\" (UID: \"79df6371-4a56-4034-8e15-947b595ac5bb\") " pod="kube-system/coredns-5dd5756b68-q28mc"
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: W1208 18:31:03.863361    1599 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio-d5ce8411d86bd0d8a879e114d2d619fe86840b30e2c0b63f0287f6e5162e0b62 WatchSource:0}: Error finding container d5ce8411d86bd0d8a879e114d2d619fe86840b30e2c0b63f0287f6e5162e0b62: Status 404 returned error can't find the container with id d5ce8411d86bd0d8a879e114d2d619fe86840b30e2c0b63f0287f6e5162e0b62
	Dec 08 18:31:03 multinode-985452 kubelet[1599]: W1208 18:31:03.863652    1599 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio-a71d87a20b1583fd53bd0cb961f04dda0eacfa4f6ec6b4c17103b02079fd6ef6 WatchSource:0}: Error finding container a71d87a20b1583fd53bd0cb961f04dda0eacfa4f6ec6b4c17103b02079fd6ef6: Status 404 returned error can't find the container with id a71d87a20b1583fd53bd0cb961f04dda0eacfa4f6ec6b4c17103b02079fd6ef6
	Dec 08 18:31:04 multinode-985452 kubelet[1599]: I1208 18:31:04.911387    1599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.91133619 podCreationTimestamp="2023-12-08 18:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:31:04.911163063 +0000 UTC m=+45.260959949" watchObservedRunningTime="2023-12-08 18:31:04.91133619 +0000 UTC m=+45.261133082"
	Dec 08 18:31:04 multinode-985452 kubelet[1599]: I1208 18:31:04.920349    1599 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q28mc" podStartSLOduration=32.920301833 podCreationTimestamp="2023-12-08 18:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-08 18:31:04.920165258 +0000 UTC m=+45.269962145" watchObservedRunningTime="2023-12-08 18:31:04.920301833 +0000 UTC m=+45.270098720"
	Dec 08 18:31:22 multinode-985452 kubelet[1599]: I1208 18:31:22.439268    1599 topology_manager.go:215] "Topology Admit Handler" podUID="24831e1c-e6cd-47db-afdf-87299e26dfb1" podNamespace="default" podName="busybox-5bc68d56bd-mb9gz"
	Dec 08 18:31:22 multinode-985452 kubelet[1599]: I1208 18:31:22.459281    1599 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6ll\" (UniqueName: \"kubernetes.io/projected/24831e1c-e6cd-47db-afdf-87299e26dfb1-kube-api-access-dw6ll\") pod \"busybox-5bc68d56bd-mb9gz\" (UID: \"24831e1c-e6cd-47db-afdf-87299e26dfb1\") " pod="default/busybox-5bc68d56bd-mb9gz"
	Dec 08 18:31:22 multinode-985452 kubelet[1599]: W1208 18:31:22.779342    1599 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio-83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858 WatchSource:0}: Error finding container 83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858: Status 404 returned error can't find the container with id 83310835ec3554f4ec822cb6e35aa727e73fa9e43bd7a844d6f8076909013858
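
The `Status 404 ... can't find the container` warnings come from cAdvisor racing container creation and are harmless here; each affected pod reports an observed startup duration shortly afterwards. A minimal liveness probe for the kubelet itself, assuming the default healthz port (10248):

	$ out/minikube-linux-amd64 -p multinode-985452 ssh "curl -s http://localhost:10248/healthz"
	# expected: ok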
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-985452 -n multinode-985452
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985452 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.17s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (93.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.976317277.exe start -p running-upgrade-189872 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.976317277.exe start -p running-upgrade-189872 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m27.052352734s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-189872 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-189872 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.580907764s)

                                                
                                                
-- stdout --
	* [running-upgrade-189872] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-189872 in cluster running-upgrade-189872
	* Pulling base image ...
	* Updating the running docker "running-upgrade-189872" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 18:42:40.713882  499449 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:42:40.714036  499449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:42:40.714042  499449 out.go:309] Setting ErrFile to fd 2...
	I1208 18:42:40.714048  499449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:42:40.714329  499449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:42:40.715048  499449 out.go:303] Setting JSON to false
	I1208 18:42:40.716552  499449 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8661,"bootTime":1702052300,"procs":468,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:42:40.716617  499449 start.go:138] virtualization: kvm guest
	I1208 18:42:40.719022  499449 out.go:177] * [running-upgrade-189872] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:42:40.720594  499449 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:42:40.720630  499449 notify.go:220] Checking for updates...
	I1208 18:42:40.722164  499449 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:42:40.724121  499449 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:42:40.729813  499449 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:42:40.731323  499449 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:42:40.733086  499449 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:42:40.735116  499449 config.go:182] Loaded profile config "running-upgrade-189872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1208 18:42:40.735147  499449 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:42:40.737259  499449 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1208 18:42:40.738696  499449 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:42:40.771958  499449 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:42:40.772088  499449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:42:40.883206  499449 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:93 SystemTime:2023-12-08 18:42:40.866257359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:42:40.883447  499449 docker.go:295] overlay module found
	I1208 18:42:40.885537  499449 out.go:177] * Using the docker driver based on existing profile
	I1208 18:42:40.887414  499449 start.go:298] selected driver: docker
	I1208 18:42:40.887454  499449 start.go:902] validating driver "docker" against &{Name:running-upgrade-189872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-189872 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1208 18:42:40.887597  499449 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:42:40.888743  499449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:42:40.965680  499449 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:89 SystemTime:2023-12-08 18:42:40.953979411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:42:40.966163  499449 cni.go:84] Creating CNI manager for ""
	I1208 18:42:40.966191  499449 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1208 18:42:40.966203  499449 start_flags.go:323] config:
	{Name:running-upgrade-189872 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-189872 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1208 18:42:40.968309  499449 out.go:177] * Starting control plane node running-upgrade-189872 in cluster running-upgrade-189872
	I1208 18:42:40.970213  499449 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:42:40.971663  499449 out.go:177] * Pulling base image ...
	I1208 18:42:40.973123  499449 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1208 18:42:40.973231  499449 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:42:40.998722  499449 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon, skipping pull
	I1208 18:42:40.998776  499449 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in daemon, skipping load
	W1208 18:42:41.033431  499449 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1208 18:42:41.033628  499449 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/running-upgrade-189872/config.json ...
	I1208 18:42:41.033921  499449 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:42:41.033976  499449 start.go:365] acquiring machines lock for running-upgrade-189872: {Name:mk3039af12b416b6a5248f6b165cbc16b654d6a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.034090  499449 start.go:369] acquired machines lock for "running-upgrade-189872" in 69.627µs
	I1208 18:42:41.034119  499449 start.go:96] Skipping create...Using existing machine configuration
	I1208 18:42:41.034132  499449 fix.go:54] fixHost starting: m01
	I1208 18:42:41.034418  499449 cli_runner.go:164] Run: docker container inspect running-upgrade-189872 --format={{.State.Status}}
	I1208 18:42:41.034626  499449 cache.go:107] acquiring lock: {Name:mkf79bf3759b550a09d3b466f54ec6eae8eaff52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.034644  499449 cache.go:107] acquiring lock: {Name:mkccfdc3d68c2c0b817bd9b86bcbe464964365a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.034681  499449 cache.go:107] acquiring lock: {Name:mk5eb2a850b24af07c22c71bf62f486065adca43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.034741  499449 cache.go:107] acquiring lock: {Name:mkecdcdf718ab5f0f58a059602d7fe23ad2d40f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.034853  499449 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1208 18:42:41.034861  499449 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1208 18:42:41.034863  499449 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1208 18:42:41.035091  499449 cache.go:107] acquiring lock: {Name:mk8a026858f894316661ab25756c4be1ddfbbb11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.035179  499449 cache.go:107] acquiring lock: {Name:mkd2402d289532e4add3583a2dca5b01ecd29cac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.035263  499449 cache.go:107] acquiring lock: {Name:mk2a7f8ec108dc105e029ad847509ad25ee9592a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.035308  499449 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1208 18:42:41.035351  499449 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1208 18:42:41.035198  499449 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1208 18:42:41.034707  499449 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 18:42:41.035487  499449 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 868.747µs
	I1208 18:42:41.035505  499449 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 18:42:41.036194  499449 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1208 18:42:41.036336  499449 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1208 18:42:41.036502  499449 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1208 18:42:41.036643  499449 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1208 18:42:41.036857  499449 cache.go:107] acquiring lock: {Name:mk84fd1ba3fde67f3acafb73bddb9bd783dff6e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:41.036996  499449 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1208 18:42:41.036795  499449 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1208 18:42:41.037303  499449 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1208 18:42:41.037632  499449 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1208 18:42:41.063013  499449 fix.go:102] recreateIfNeeded on running-upgrade-189872: state=Running err=<nil>
	W1208 18:42:41.063046  499449 fix.go:128] unexpected machine state, will restart: <nil>
	I1208 18:42:41.065615  499449 out.go:177] * Updating the running docker "running-upgrade-189872" container ...
	I1208 18:42:41.067081  499449 machine.go:88] provisioning docker machine ...
	I1208 18:42:41.067127  499449 ubuntu.go:169] provisioning hostname "running-upgrade-189872"
	I1208 18:42:41.067193  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:41.087783  499449 main.go:141] libmachine: Using SSH client type: native
	I1208 18:42:41.088322  499449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33227 <nil> <nil>}
	I1208 18:42:41.088342  499449 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-189872 && echo "running-upgrade-189872" | sudo tee /etc/hostname
	I1208 18:42:41.231796  499449 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-189872
	
	I1208 18:42:41.231897  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:41.264997  499449 main.go:141] libmachine: Using SSH client type: native
	I1208 18:42:41.265477  499449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33227 <nil> <nil>}
	I1208 18:42:41.265518  499449 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-189872' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-189872/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-189872' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 18:42:41.274190  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1208 18:42:41.304327  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1208 18:42:41.307409  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1208 18:42:41.311016  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1208 18:42:41.341811  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1208 18:42:41.358160  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1208 18:42:41.358193  499449 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 323.529148ms
	I1208 18:42:41.358210  499449 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1208 18:42:41.377193  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1208 18:42:41.384147  499449 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:42:41.384178  499449 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:42:41.384205  499449 ubuntu.go:177] setting up certificates
	I1208 18:42:41.384223  499449 provision.go:83] configureAuth start
	I1208 18:42:41.384287  499449 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-189872
	I1208 18:42:41.404598  499449 cache.go:162] opening:  /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1208 18:42:41.408052  499449 provision.go:138] copyHostCerts
	I1208 18:42:41.408163  499449 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem, removing ...
	I1208 18:42:41.408180  499449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:42:41.408256  499449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:42:41.408545  499449 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem, removing ...
	I1208 18:42:41.408620  499449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:42:41.408703  499449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:42:41.408818  499449 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem, removing ...
	I1208 18:42:41.408833  499449 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:42:41.408904  499449 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:42:41.409099  499449 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-189872 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-189872]
	I1208 18:42:41.718709  499449 provision.go:172] copyRemoteCerts
	I1208 18:42:41.718847  499449 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:42:41.718910  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:41.770567  499449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33227 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/running-upgrade-189872/id_rsa Username:docker}
	I1208 18:42:41.903794  499449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 18:42:41.941524  499449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 18:42:41.979960  499449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:42:42.002827  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1208 18:42:42.002861  499449 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 967.777587ms
	I1208 18:42:42.002878  499449 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1208 18:42:42.008582  499449 provision.go:86] duration metric: configureAuth took 624.337053ms
	I1208 18:42:42.008611  499449 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:42:42.008783  499449 config.go:182] Loaded profile config "running-upgrade-189872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1208 18:42:42.008882  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:42.034669  499449 main.go:141] libmachine: Using SSH client type: native
	I1208 18:42:42.035215  499449 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33227 <nil> <nil>}
	I1208 18:42:42.035235  499449 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:42:42.290763  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1208 18:42:42.290800  499449 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.255539574s
	I1208 18:42:42.290825  499449 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1208 18:42:42.328086  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1208 18:42:42.328120  499449 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.29349954s
	I1208 18:42:42.328136  499449 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1208 18:42:42.484348  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1208 18:42:42.484443  499449 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.447593249s
	I1208 18:42:42.484472  499449 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1208 18:42:42.681041  499449 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:42:42.681071  499449 machine.go:91] provisioned docker machine in 1.613969065s
	I1208 18:42:42.681083  499449 start.go:300] post-start starting for "running-upgrade-189872" (driver="docker")
	I1208 18:42:42.681097  499449 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:42:42.681163  499449 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:42:42.681212  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:42.728574  499449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33227 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/running-upgrade-189872/id_rsa Username:docker}
	I1208 18:42:42.846677  499449 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:42:42.850195  499449 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:42:42.850221  499449 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:42:42.850234  499449 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:42:42.850243  499449 info.go:137] Remote host: Ubuntu 19.10
	I1208 18:42:42.850254  499449 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:42:42.850309  499449 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:42:42.850396  499449 filesync.go:149] local asset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> 3436282.pem in /etc/ssl/certs
	I1208 18:42:42.850513  499449 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 18:42:42.859949  499449 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:42:42.887833  499449 start.go:303] post-start completed in 206.731137ms
	I1208 18:42:42.887926  499449 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:42:42.887985  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:42.909310  499449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33227 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/running-upgrade-189872/id_rsa Username:docker}
	I1208 18:42:42.999360  499449 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:42:43.003538  499449 fix.go:56] fixHost completed within 1.969397163s
	I1208 18:42:43.003560  499449 start.go:83] releasing machines lock for "running-upgrade-189872", held for 1.969452878s
	I1208 18:42:43.003641  499449 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-189872
	I1208 18:42:43.034908  499449 ssh_runner.go:195] Run: cat /version.json
	I1208 18:42:43.034967  499449 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:42:43.034984  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:43.035049  499449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-189872
	I1208 18:42:43.060672  499449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33227 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/running-upgrade-189872/id_rsa Username:docker}
	I1208 18:42:43.075024  499449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33227 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/running-upgrade-189872/id_rsa Username:docker}
	W1208 18:42:43.145844  499449 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1208 18:42:43.160248  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1208 18:42:43.160321  499449 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.125581233s
	I1208 18:42:43.160348  499449 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1208 18:42:43.385744  499449 cache.go:157] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1208 18:42:43.385778  499449 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.350606135s
	I1208 18:42:43.385791  499449 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1208 18:42:43.385810  499449 cache.go:87] Successfully saved all images to host disk.
	I1208 18:42:43.385863  499449 ssh_runner.go:195] Run: systemctl --version
	I1208 18:42:43.390432  499449 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:42:43.451776  499449 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:42:43.456000  499449 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:42:43.531704  499449 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:42:43.531791  499449 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:42:43.652503  499449 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 18:42:43.652529  499449 start.go:475] detecting cgroup driver to use...
	I1208 18:42:43.652566  499449 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:42:43.652616  499449 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:42:43.682839  499449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:42:43.693501  499449 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:42:43.693566  499449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:42:43.704398  499449 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:42:43.715663  499449 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1208 18:42:43.726438  499449 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1208 18:42:43.726515  499449 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:42:43.895587  499449 docker.go:219] disabling docker service ...
	I1208 18:42:43.895657  499449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:42:43.906708  499449 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:42:43.917889  499449 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:42:44.001928  499449 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:42:44.106373  499449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 18:42:44.117370  499449 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:42:44.152950  499449 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1208 18:42:44.153016  499449 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:42:44.166351  499449 out.go:177] 
	W1208 18:42:44.167753  499449 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1208 18:42:44.167779  499449 out.go:239] * 
	* 
	W1208 18:42:44.169136  499449 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 18:42:44.170589  499449 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-189872 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-08 18:42:44.202983185 +0000 UTC m=+1956.843178489
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-189872
helpers_test.go:235: (dbg) docker inspect running-upgrade-189872:

-- stdout --
	[
	    {
	        "Id": "d212930257b8ad90a1f4845e0e1ca021ed28471c01dd978832f47e7e7153ee99",
	        "Created": "2023-12-08T18:41:31.894067956Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-08T18:41:32.386301086Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/d212930257b8ad90a1f4845e0e1ca021ed28471c01dd978832f47e7e7153ee99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d212930257b8ad90a1f4845e0e1ca021ed28471c01dd978832f47e7e7153ee99/hostname",
	        "HostsPath": "/var/lib/docker/containers/d212930257b8ad90a1f4845e0e1ca021ed28471c01dd978832f47e7e7153ee99/hosts",
	        "LogPath": "/var/lib/docker/containers/d212930257b8ad90a1f4845e0e1ca021ed28471c01dd978832f47e7e7153ee99/d212930257b8ad90a1f4845e0e1ca021ed28471c01dd978832f47e7e7153ee99-json.log",
	        "Name": "/running-upgrade-189872",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-189872:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/be778aa6132e0c64e035af82d3b4552cb0e258ccc7f9be83999569cc4c4ae6da-init/diff:/var/lib/docker/overlay2/92fb9be3142c34011d80b29c42deaf6eecf48e30f91e9198cfc483c15d61e058/diff:/var/lib/docker/overlay2/1ba7879629b6c8fc6e07b1ed35a26d924da250d38c91fe6c694f8bf3ddd40b92/diff:/var/lib/docker/overlay2/8f4f1ef5867d02bcff905bdcbc9860a6d8513b4e8aece4085fabb831b029aa1d/diff:/var/lib/docker/overlay2/c86af244bcdafb0f929a79c2e5f5026df54a9a39af7f706a9353f5f1bcc6fe3e/diff:/var/lib/docker/overlay2/740bae1b5f0daaf92893d4858013a938c218d35999d65a2485fa5bbf529c4e51/diff:/var/lib/docker/overlay2/aab81020af068acbd86835712398641e4e8dfa46b070266528e1eb5c5953acd4/diff:/var/lib/docker/overlay2/7816d2bcca620fdb9b8709021f17a3a0afd2e831571b6b02f8fd6a4e696db1a9/diff:/var/lib/docker/overlay2/fa98e772fd43bca0e160f5bbe1dd6777d5efa72d9479abcc8565d9227bfc2b6f/diff:/var/lib/docker/overlay2/f99b7a6a9f345e46452d36e4ab6d8d5c7b5f979aa41a8fdb41bf459701198352/diff:/var/lib/docker/overlay2/cb623b
65d906429cff7623b03357af6169a9b17bd3306716ae6e3c1c8eab219a/diff:/var/lib/docker/overlay2/a127e234abd9d928b65e2385923c9613c27293c3ffb7bd9b6074c88036f06953/diff:/var/lib/docker/overlay2/cc4c82575d5b11c8589aa5fd64ecb7a5c2f8ca58298c6a7e84ec1d1795153b2b/diff:/var/lib/docker/overlay2/835e3edf7d16b54e339f9fa5ca846f34e85a6f891742f2e87f0b16c975f27a58/diff:/var/lib/docker/overlay2/439cb914e045c4522716411c94d828e33cbbab015ddc9928b8be1b026e282eea/diff:/var/lib/docker/overlay2/c1fed74972a28a0daf4a9c4a6f3877d2434ffced257eee92b9643eb2e7df8846/diff:/var/lib/docker/overlay2/83b64ea4b6ff8d81c915cdb98af5224ea0b13992e5a4200f74d22c9a01931e6b/diff:/var/lib/docker/overlay2/a50b5b3d0411591997d96448365bbd825be4f61a08feedc392af905605ae5988/diff:/var/lib/docker/overlay2/ac2fcbbccee558daa58c9062eff9ac948e30e1990053534c225746dd2921ff5a/diff:/var/lib/docker/overlay2/4205c8fdb88a6e10f55e0d3ac17c276d32feed2558bb775c22ff7f9f5f5ad832/diff:/var/lib/docker/overlay2/abfc38c7402d456567e61175084dc3978c13b52130c94fccccb589265fbf7ad9/diff:/var/lib/d
ocker/overlay2/1ad80a506b97e75b54cecd04c197c609032ff3994cbbc8bee4946e6dca0d0f73/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be778aa6132e0c64e035af82d3b4552cb0e258ccc7f9be83999569cc4c4ae6da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be778aa6132e0c64e035af82d3b4552cb0e258ccc7f9be83999569cc4c4ae6da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be778aa6132e0c64e035af82d3b4552cb0e258ccc7f9be83999569cc4c4ae6da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-189872",
	                "Source": "/var/lib/docker/volumes/running-upgrade-189872/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-189872",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-189872",
	                "name.minikube.sigs.k8s.io": "running-upgrade-189872",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b0d9c4bced254ed2986faac90fbd35f08e9045074fba8cbe10173aea95fa323d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b0d9c4bced25",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "64b144259c52d490ee474a75d6473fb8152742a5ca39291219b367f6765b9412",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "ee2b366eba027903a4432da2dbe638682b04da982488bb972dca8e0222ac3a0c",
	                    "EndpointID": "64b144259c52d490ee474a75d6473fb8152742a5ca39291219b367f6765b9412",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-189872 -n running-upgrade-189872
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-189872 -n running-upgrade-189872: exit status 4 (415.757242ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 18:42:44.619775  501118 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-189872" does not appear in /home/jenkins/minikube-integration/17738-336823/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-189872" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-189872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-189872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-189872: (1.953423364s)
--- FAIL: TestRunningBinaryUpgrade (93.60s)
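
Analysis: the upgrade failed in the container-runtime enable step. The HEAD binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the machine provisioned by minikube v1.9.0 reports "Remote host: Ubuntu 19.10" and ships a CRI-O that predates the crio.conf.d drop-in layout, so the sed above exits with status 2 ("No such file or directory") and start aborts with RUNTIME_ENABLE (exit status 90). Below is a minimal shell sketch of a more defensive version of that step, assuming CRI-O's standard [crio.image] TOML section; the paths and the pause image tag are taken from the log above, and the snippet is a hypothetical workaround, not the code minikube actually runs:

    # Rewrite pause_image if the drop-in exists; otherwise create it first.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo mkdir -p "$(dirname "$CONF")"
    if [ -f "$CONF" ]; then
      sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    else
      # Assumes pause_image belongs under CRI-O's [crio.image] section.
      printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
    fi
    sudo systemctl restart crio

TestStoppedBinaryUpgrade/Upgrade below exits with the same status 90 from the same v1.9.0 starting point, which is consistent with a problem in the upgrade path itself rather than in this particular profile.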

TestStoppedBinaryUpgrade/Upgrade (96.88s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3865679741.exe start -p stopped-upgrade-897546 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3865679741.exe start -p stopped-upgrade-897546 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m28.580916557s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3865679741.exe -p stopped-upgrade-897546 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.3865679741.exe -p stopped-upgrade-897546 stop: (2.054186876s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-897546 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-897546 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.241513606s)

-- stdout --
	* [stopped-upgrade-897546] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-897546 in cluster stopped-upgrade-897546
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-897546" ...
	
	

-- /stdout --
** stderr ** 
	I1208 18:42:44.231961  501074 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:42:44.232109  501074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:42:44.232120  501074 out.go:309] Setting ErrFile to fd 2...
	I1208 18:42:44.232124  501074 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:42:44.232341  501074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:42:44.232966  501074 out.go:303] Setting JSON to false
	I1208 18:42:44.234395  501074 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8664,"bootTime":1702052300,"procs":464,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:42:44.234498  501074 start.go:138] virtualization: kvm guest
	I1208 18:42:44.236442  501074 out.go:177] * [stopped-upgrade-897546] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:42:44.238577  501074 notify.go:220] Checking for updates...
	I1208 18:42:44.239889  501074 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:42:44.241738  501074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:42:44.245388  501074 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:42:44.247203  501074 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:42:44.251242  501074 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:42:44.252754  501074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:42:44.255690  501074 config.go:182] Loaded profile config "stopped-upgrade-897546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1208 18:42:44.255818  501074 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0
	I1208 18:42:44.258043  501074 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1208 18:42:44.259759  501074 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:42:44.292695  501074 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:42:44.292809  501074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:42:44.359806  501074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:52 SystemTime:2023-12-08 18:42:44.348309194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:42:44.359968  501074 docker.go:295] overlay module found
	I1208 18:42:44.362096  501074 out.go:177] * Using the docker driver based on existing profile
	I1208 18:42:44.363502  501074 start.go:298] selected driver: docker
	I1208 18:42:44.363517  501074 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-897546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-897546 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1208 18:42:44.363610  501074 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:42:44.364770  501074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:42:44.462223  501074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:52 SystemTime:2023-12-08 18:42:44.452510669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:42:44.462612  501074 cni.go:84] Creating CNI manager for ""
	I1208 18:42:44.462642  501074 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1208 18:42:44.462656  501074 start_flags.go:323] config:
	{Name:stopped-upgrade-897546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-897546 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1208 18:42:44.464588  501074 out.go:177] * Starting control plane node stopped-upgrade-897546 in cluster stopped-upgrade-897546
	I1208 18:42:44.465837  501074 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:42:44.467300  501074 out.go:177] * Pulling base image ...
	I1208 18:42:44.468437  501074 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1208 18:42:44.468475  501074 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:42:44.488203  501074 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon, skipping pull
	I1208 18:42:44.488246  501074 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in daemon, skipping load
	W1208 18:42:44.506366  501074 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1208 18:42:44.506639  501074 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/stopped-upgrade-897546/config.json ...
	I1208 18:42:44.506648  501074 cache.go:107] acquiring lock: {Name:mk84fd1ba3fde67f3acafb73bddb9bd783dff6e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506763  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1208 18:42:44.506775  501074 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 155.608µs
	I1208 18:42:44.506652  501074 cache.go:107] acquiring lock: {Name:mkf79bf3759b550a09d3b466f54ec6eae8eaff52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506806  501074 cache.go:107] acquiring lock: {Name:mk2a7f8ec108dc105e029ad847509ad25ee9592a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506809  501074 cache.go:107] acquiring lock: {Name:mk5eb2a850b24af07c22c71bf62f486065adca43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506658  501074 cache.go:107] acquiring lock: {Name:mkd2402d289532e4add3583a2dca5b01ecd29cac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506842  501074 cache.go:107] acquiring lock: {Name:mk8a026858f894316661ab25756c4be1ddfbbb11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506875  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 18:42:44.506878  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1208 18:42:44.506885  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1208 18:42:44.506889  501074 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 261.735µs
	I1208 18:42:44.506895  501074 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 88.012µs
	I1208 18:42:44.506902  501074 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 18:42:44.506786  501074 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1208 18:42:44.506909  501074 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 100.503µs
	I1208 18:42:44.506922  501074 cache.go:194] Successfully downloaded all kic artifacts
	I1208 18:42:44.506927  501074 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1208 18:42:44.506913  501074 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1208 18:42:44.506951  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1208 18:42:44.506948  501074 start.go:365] acquiring machines lock for stopped-upgrade-897546: {Name:mk538000b6442500c44fd4f6194bd609727e5885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.506951  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1208 18:42:44.506959  501074 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 321.445µs
	I1208 18:42:44.506969  501074 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1208 18:42:44.506966  501074 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 152.88µs
	I1208 18:42:44.506983  501074 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1208 18:42:44.507025  501074 start.go:369] acquired machines lock for "stopped-upgrade-897546" in 61.406µs
	I1208 18:42:44.507045  501074 start.go:96] Skipping create...Using existing machine configuration
	I1208 18:42:44.507058  501074 fix.go:54] fixHost starting: m01
	I1208 18:42:44.507064  501074 cache.go:107] acquiring lock: {Name:mkccfdc3d68c2c0b817bd9b86bcbe464964365a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.507174  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1208 18:42:44.507190  501074 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 159.811µs
	I1208 18:42:44.507199  501074 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1208 18:42:44.507250  501074 cache.go:107] acquiring lock: {Name:mkecdcdf718ab5f0f58a059602d7fe23ad2d40f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 18:42:44.507342  501074 cache.go:115] /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1208 18:42:44.507350  501074 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 134.676µs
	I1208 18:42:44.507366  501074 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1208 18:42:44.507373  501074 cache.go:87] Successfully saved all images to host disk.
	I1208 18:42:44.507399  501074 cli_runner.go:164] Run: docker container inspect stopped-upgrade-897546 --format={{.State.Status}}
	I1208 18:42:44.526943  501074 fix.go:102] recreateIfNeeded on stopped-upgrade-897546: state=Stopped err=<nil>
	W1208 18:42:44.527004  501074 fix.go:128] unexpected machine state, will restart: <nil>
	I1208 18:42:44.528686  501074 out.go:177] * Restarting existing docker container for "stopped-upgrade-897546" ...
	I1208 18:42:44.529957  501074 cli_runner.go:164] Run: docker start stopped-upgrade-897546
	I1208 18:42:44.841031  501074 cli_runner.go:164] Run: docker container inspect stopped-upgrade-897546 --format={{.State.Status}}
	I1208 18:42:44.859709  501074 kic.go:430] container "stopped-upgrade-897546" state is running.
	I1208 18:42:44.860083  501074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-897546
	I1208 18:42:44.884030  501074 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/stopped-upgrade-897546/config.json ...
	I1208 18:42:44.884342  501074 machine.go:88] provisioning docker machine ...
	I1208 18:42:44.884381  501074 ubuntu.go:169] provisioning hostname "stopped-upgrade-897546"
	I1208 18:42:44.884451  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:44.910588  501074 main.go:141] libmachine: Using SSH client type: native
	I1208 18:42:44.911130  501074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33248 <nil> <nil>}
	I1208 18:42:44.911156  501074 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-897546 && echo "stopped-upgrade-897546" | sudo tee /etc/hostname
	I1208 18:42:44.911853  501074 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33250->127.0.0.1:33248: read: connection reset by peer
	I1208 18:42:48.039232  501074 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-897546
	
	I1208 18:42:48.039303  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:48.059680  501074 main.go:141] libmachine: Using SSH client type: native
	I1208 18:42:48.060042  501074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33248 <nil> <nil>}
	I1208 18:42:48.060067  501074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-897546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-897546/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-897546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 18:42:48.174406  501074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1208 18:42:48.174435  501074 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17738-336823/.minikube CaCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17738-336823/.minikube}
	I1208 18:42:48.174539  501074 ubuntu.go:177] setting up certificates
	I1208 18:42:48.174553  501074 provision.go:83] configureAuth start
	I1208 18:42:48.174616  501074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-897546
	I1208 18:42:48.196158  501074 provision.go:138] copyHostCerts
	I1208 18:42:48.196231  501074 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem, removing ...
	I1208 18:42:48.196256  501074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem
	I1208 18:42:48.196347  501074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/ca.pem (1082 bytes)
	I1208 18:42:48.196458  501074 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem, removing ...
	I1208 18:42:48.196470  501074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem
	I1208 18:42:48.196501  501074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/cert.pem (1123 bytes)
	I1208 18:42:48.196570  501074 exec_runner.go:144] found /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem, removing ...
	I1208 18:42:48.196579  501074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem
	I1208 18:42:48.196609  501074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17738-336823/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17738-336823/.minikube/key.pem (1679 bytes)
	I1208 18:42:48.196665  501074 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-897546 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-897546]
	I1208 18:42:48.514387  501074 provision.go:172] copyRemoteCerts
	I1208 18:42:48.514475  501074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 18:42:48.514521  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:48.537139  501074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/stopped-upgrade-897546/id_rsa Username:docker}
	I1208 18:42:48.622136  501074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1208 18:42:48.644205  501074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 18:42:48.668187  501074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 18:42:48.685624  501074 provision.go:86] duration metric: configureAuth took 511.056707ms
	I1208 18:42:48.685659  501074 ubuntu.go:193] setting minikube options for container-runtime
	I1208 18:42:48.685885  501074 config.go:182] Loaded profile config "stopped-upgrade-897546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1208 18:42:48.686008  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:48.704806  501074 main.go:141] libmachine: Using SSH client type: native
	I1208 18:42:48.705315  501074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 127.0.0.1 33248 <nil> <nil>}
	I1208 18:42:48.705345  501074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 18:42:49.428678  501074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 18:42:49.428701  501074 machine.go:91] provisioned docker machine in 4.544342253s
	I1208 18:42:49.428712  501074 start.go:300] post-start starting for "stopped-upgrade-897546" (driver="docker")
	I1208 18:42:49.428757  501074 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 18:42:49.428818  501074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 18:42:49.428863  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:49.450102  501074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/stopped-upgrade-897546/id_rsa Username:docker}
	I1208 18:42:49.534869  501074 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 18:42:49.538075  501074 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1208 18:42:49.538103  501074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 18:42:49.538116  501074 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1208 18:42:49.538124  501074 info.go:137] Remote host: Ubuntu 19.10
	I1208 18:42:49.538133  501074 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/addons for local assets ...
	I1208 18:42:49.538181  501074 filesync.go:126] Scanning /home/jenkins/minikube-integration/17738-336823/.minikube/files for local assets ...
	I1208 18:42:49.538248  501074 filesync.go:149] local asset: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem -> 3436282.pem in /etc/ssl/certs
	I1208 18:42:49.538345  501074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 18:42:49.545895  501074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/ssl/certs/3436282.pem --> /etc/ssl/certs/3436282.pem (1708 bytes)
	I1208 18:42:49.605574  501074 start.go:303] post-start completed in 176.846537ms
	I1208 18:42:49.605658  501074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:42:49.605700  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:49.653361  501074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/stopped-upgrade-897546/id_rsa Username:docker}
	I1208 18:42:49.734810  501074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 18:42:49.738609  501074 fix.go:56] fixHost completed within 5.231545s
	I1208 18:42:49.738641  501074 start.go:83] releasing machines lock for "stopped-upgrade-897546", held for 5.231603653s
	I1208 18:42:49.738722  501074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-897546
	I1208 18:42:49.759531  501074 ssh_runner.go:195] Run: cat /version.json
	I1208 18:42:49.759575  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:49.759647  501074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 18:42:49.759722  501074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-897546
	I1208 18:42:49.777839  501074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/stopped-upgrade-897546/id_rsa Username:docker}
	I1208 18:42:49.781378  501074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33248 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/stopped-upgrade-897546/id_rsa Username:docker}
	W1208 18:42:49.898805  501074 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1208 18:42:49.898881  501074 ssh_runner.go:195] Run: systemctl --version
	I1208 18:42:49.902824  501074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 18:42:49.964818  501074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 18:42:49.969488  501074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:42:49.985025  501074 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1208 18:42:49.985100  501074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 18:42:50.017691  501074 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 18:42:50.017717  501074 start.go:475] detecting cgroup driver to use...
	I1208 18:42:50.017755  501074 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1208 18:42:50.017819  501074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 18:42:50.045222  501074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 18:42:50.057993  501074 docker.go:203] disabling cri-docker service (if available) ...
	I1208 18:42:50.058059  501074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 18:42:50.068845  501074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 18:42:50.080062  501074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1208 18:42:50.090294  501074 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1208 18:42:50.090347  501074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 18:42:50.156188  501074 docker.go:219] disabling docker service ...
	I1208 18:42:50.156237  501074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 18:42:50.166528  501074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 18:42:50.177298  501074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 18:42:50.247612  501074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 18:42:50.322792  501074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 18:42:50.334115  501074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 18:42:50.347994  501074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1208 18:42:50.348073  501074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 18:42:50.359332  501074 out.go:177] 
	W1208 18:42:50.360701  501074 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1208 18:42:50.360724  501074 out.go:239] * 
	* 
	W1208 18:42:50.361975  501074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 18:42:50.363477  501074 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-897546 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (96.88s)
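Diagnosis note: the upgrade aborts with RUNTIME_ENABLE because the new binary patches pause_image through the drop-in file /etc/crio/crio.conf.d/02-crio.conf, while the v1.9.0-era base image (Remote host: Ubuntu 19.10, per the log above) appears to ship only the monolithic /etc/crio/crio.conf, so the sed exits with status 2. A hedged workaround sketch, assuming SSH to the restarted container still works (provisioning succeeded above) and that the monolithic-file fallback matches the old image's layout:

	# Sketch: pick whichever CRI-O config file the old image actually has,
	# then apply the same pause_image edit the failing step attempted.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	out/minikube-linux-amd64 -p stopped-upgrade-897546 ssh -- "sudo test -f $CONF" \
	  || CONF=/etc/crio/crio.conf
	out/minikube-linux-amd64 -p stopped-upgrade-897546 ssh -- \
	  "sudo sed -i 's|^.*pause_image = .*$|pause_image = \"registry.k8s.io/pause:3.2\"|' $CONF"

If the fallback edit succeeds, rerunning the failed start should get past the RUNTIME_ENABLE step; whether the v1.18.0-era CRI-O then comes up cleanly under the new binary is untested here.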


Test pass (282/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.72
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 5.77
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.1/json-events 6.61
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.21
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
25 TestDownloadOnlyKic 1.3
26 TestBinaryMirror 0.73
27 TestOffline 83.18
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 151.59
34 TestAddons/parallel/Registry 14.11
37 TestAddons/parallel/MetricsServer 5.65
38 TestAddons/parallel/HelmTiller 9.81
40 TestAddons/parallel/CSI 39.94
41 TestAddons/parallel/Headlamp 12.07
42 TestAddons/parallel/CloudSpanner 5.92
43 TestAddons/parallel/LocalPath 56.75
44 TestAddons/parallel/NvidiaDevicePlugin 5.47
47 TestAddons/serial/GCPAuth/Namespaces 0.12
48 TestAddons/StoppedEnableDisable 12.21
49 TestCertOptions 26.01
50 TestCertExpiration 230.1
52 TestForceSystemdFlag 30.29
53 TestForceSystemdEnv 29.72
55 TestKVMDriverInstallOrUpdate 3.01
59 TestErrorSpam/setup 24.74
60 TestErrorSpam/start 0.63
61 TestErrorSpam/status 0.9
62 TestErrorSpam/pause 1.53
63 TestErrorSpam/unpause 1.5
64 TestErrorSpam/stop 1.41
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.42
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 32.6
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
76 TestFunctional/serial/CacheCmd/cache/add_local 1.14
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 38.79
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.36
87 TestFunctional/serial/LogsFileCmd 1.37
88 TestFunctional/serial/InvalidService 4.6
90 TestFunctional/parallel/ConfigCmd 0.47
91 TestFunctional/parallel/DashboardCmd 14.68
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.27
94 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/ServiceCmdConnect 9.24
99 TestFunctional/parallel/AddonsCmd 0.18
100 TestFunctional/parallel/PersistentVolumeClaim 34.87
102 TestFunctional/parallel/SSHCmd 0.77
103 TestFunctional/parallel/CpCmd 1.34
104 TestFunctional/parallel/MySQL 21.23
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 2.25
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
114 TestFunctional/parallel/License 0.2
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.65
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.57
122 TestFunctional/parallel/ImageCommands/Setup 0.95
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
126 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.31
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.37
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.12
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.46
135 TestFunctional/parallel/ServiceCmd/List 0.39
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
138 TestFunctional/parallel/ServiceCmd/Format 0.39
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ServiceCmd/URL 0.44
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.9
148 TestFunctional/parallel/ProfileCmd/profile_list 0.38
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
151 TestFunctional/parallel/MountCmd/any-port 7.07
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.13
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.91
154 TestFunctional/parallel/MountCmd/specific-port 2.58
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.13
156 TestFunctional/delete_addon-resizer_images 0.07
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 72.15
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.41
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.54
169 TestJSONOutput/start/Command 69.08
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.67
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.59
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.71
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.23
194 TestKicCustomNetwork/create_custom_network 32.91
195 TestKicCustomNetwork/use_default_bridge_network 27.51
196 TestKicExistingNetwork 26.94
197 TestKicCustomSubnet 27.36
198 TestKicStaticIP 28.01
199 TestMainNoArgs 0.06
200 TestMinikubeProfile 52.64
203 TestMountStart/serial/StartWithMountFirst 5.22
204 TestMountStart/serial/VerifyMountFirst 0.25
205 TestMountStart/serial/StartWithMountSecond 7.97
206 TestMountStart/serial/VerifyMountSecond 0.25
207 TestMountStart/serial/DeleteFirst 1.61
208 TestMountStart/serial/VerifyMountPostDelete 0.25
209 TestMountStart/serial/Stop 1.21
210 TestMountStart/serial/RestartStopped 6.97
211 TestMountStart/serial/VerifyMountPostStop 0.25
214 TestMultiNode/serial/FreshStart2Nodes 84.58
215 TestMultiNode/serial/DeployApp2Nodes 3.5
217 TestMultiNode/serial/AddNode 49.33
218 TestMultiNode/serial/MultiNodeLabels 0.06
219 TestMultiNode/serial/ProfileList 0.28
220 TestMultiNode/serial/CopyFile 9.26
221 TestMultiNode/serial/StopNode 2.14
222 TestMultiNode/serial/StartAfterStop 11.31
223 TestMultiNode/serial/RestartKeepsNodes 117.47
224 TestMultiNode/serial/DeleteNode 4.71
225 TestMultiNode/serial/StopMultiNode 23.88
226 TestMultiNode/serial/RestartMultiNode 78.41
227 TestMultiNode/serial/ValidateNameConflict 24.42
232 TestPreload 143.63
234 TestScheduledStopUnix 100.46
237 TestInsufficientStorage 13.09
240 TestKubernetesUpgrade 347.99
241 TestMissingContainerUpgrade 131.37
243 TestStoppedBinaryUpgrade/Setup 0.49
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
252 TestNoKubernetes/serial/StartWithK8s 34.35
254 TestNoKubernetes/serial/StartWithStopK8s 8.84
255 TestNoKubernetes/serial/Start 11.35
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
257 TestNoKubernetes/serial/ProfileList 1.39
258 TestNoKubernetes/serial/Stop 1.47
259 TestNoKubernetes/serial/StartNoArgs 8.5
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
268 TestNetworkPlugins/group/false 5.98
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
274 TestPause/serial/Start 74.46
275 TestPause/serial/SecondStartNoReconfiguration 29.88
276 TestPause/serial/Pause 0.72
277 TestPause/serial/VerifyStatus 0.3
278 TestPause/serial/Unpause 0.71
279 TestPause/serial/PauseAgain 0.87
280 TestPause/serial/DeletePaused 4.56
281 TestPause/serial/VerifyDeletedResources 0.61
283 TestStartStop/group/old-k8s-version/serial/FirstStart 130.65
285 TestStartStop/group/embed-certs/serial/FirstStart 70.93
286 TestStartStop/group/embed-certs/serial/DeployApp 7.41
288 TestStartStop/group/no-preload/serial/FirstStart 67.71
289 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
290 TestStartStop/group/embed-certs/serial/Stop 13.29
291 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
292 TestStartStop/group/embed-certs/serial/SecondStart 336.54
293 TestStartStop/group/old-k8s-version/serial/DeployApp 7.39
294 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.78
295 TestStartStop/group/old-k8s-version/serial/Stop 12.14
296 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
297 TestStartStop/group/old-k8s-version/serial/SecondStart 409.83
298 TestStartStop/group/no-preload/serial/DeployApp 10
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
300 TestStartStop/group/no-preload/serial/Stop 12.18
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
302 TestStartStop/group/no-preload/serial/SecondStart 335.88
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.05
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.42
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.07
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
313 TestStartStop/group/embed-certs/serial/Pause 2.71
315 TestStartStop/group/newest-cni/serial/FirstStart 38.63
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
318 TestStartStop/group/newest-cni/serial/Stop 1.22
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
320 TestStartStop/group/newest-cni/serial/SecondStart 27.8
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.02
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/no-preload/serial/Pause 2.84
325 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
328 TestStartStop/group/newest-cni/serial/Pause 2.75
329 TestNetworkPlugins/group/auto/Start 69.64
330 TestNetworkPlugins/group/kindnet/Start 72.03
331 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
332 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
333 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
334 TestStartStop/group/old-k8s-version/serial/Pause 2.85
335 TestNetworkPlugins/group/calico/Start 62.22
336 TestNetworkPlugins/group/auto/KubeletFlags 0.3
337 TestNetworkPlugins/group/auto/NetCatPod 11.3
338 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
340 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
341 TestNetworkPlugins/group/auto/DNS 0.21
342 TestNetworkPlugins/group/auto/Localhost 0.14
343 TestNetworkPlugins/group/auto/HairPin 0.16
344 TestNetworkPlugins/group/kindnet/DNS 0.19
345 TestNetworkPlugins/group/kindnet/Localhost 0.15
346 TestNetworkPlugins/group/kindnet/HairPin 0.15
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.08
348 TestNetworkPlugins/group/calico/ControllerPod 5.03
349 TestNetworkPlugins/group/calico/KubeletFlags 0.35
350 TestNetworkPlugins/group/calico/NetCatPod 9.61
351 TestNetworkPlugins/group/custom-flannel/Start 61.7
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
353 TestNetworkPlugins/group/enable-default-cni/Start 43.07
354 TestNetworkPlugins/group/calico/DNS 0.17
355 TestNetworkPlugins/group/calico/Localhost 0.15
356 TestNetworkPlugins/group/calico/HairPin 0.15
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.3
359 TestNetworkPlugins/group/flannel/Start 60.97
360 TestNetworkPlugins/group/bridge/Start 80.3
361 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
362 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
368 TestNetworkPlugins/group/custom-flannel/DNS 0.16
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
371 TestNetworkPlugins/group/flannel/ControllerPod 5.02
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/flannel/NetCatPod 10.26
374 TestNetworkPlugins/group/flannel/DNS 0.15
375 TestNetworkPlugins/group/flannel/Localhost 0.13
376 TestNetworkPlugins/group/flannel/HairPin 0.13
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
378 TestNetworkPlugins/group/bridge/NetCatPod 9.31
379 TestNetworkPlugins/group/bridge/DNS 0.16
380 TestNetworkPlugins/group/bridge/Localhost 0.13
381 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.16.0/json-events (12.72s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-892064 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-892064 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.716033709s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.72s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-892064
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-892064: exit status 85 (79.654911ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-892064 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |          |
	|         | -p download-only-892064        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:10:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:10:07.476588  343640 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:10:07.476756  343640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:07.476764  343640 out.go:309] Setting ErrFile to fd 2...
	I1208 18:10:07.476769  343640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:07.476989  343640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	W1208 18:10:07.477103  343640 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17738-336823/.minikube/config/config.json: open /home/jenkins/minikube-integration/17738-336823/.minikube/config/config.json: no such file or directory
	I1208 18:10:07.477771  343640 out.go:303] Setting JSON to true
	I1208 18:10:07.478809  343640 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6707,"bootTime":1702052300,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:10:07.478879  343640 start.go:138] virtualization: kvm guest
	I1208 18:10:07.481793  343640 out.go:97] [download-only-892064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:10:07.481943  343640 notify.go:220] Checking for updates...
	W1208 18:10:07.481960  343640 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball: no such file or directory
	I1208 18:10:07.483714  343640 out.go:169] MINIKUBE_LOCATION=17738
	I1208 18:10:07.485282  343640 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:10:07.486864  343640 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:10:07.488266  343640 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:10:07.489698  343640 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 18:10:07.492324  343640 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 18:10:07.492627  343640 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:10:07.513615  343640 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:10:07.513739  343640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:07.565248  343640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-08 18:10:07.556552573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:07.565372  343640 docker.go:295] overlay module found
	I1208 18:10:07.567229  343640 out.go:97] Using the docker driver based on user configuration
	I1208 18:10:07.567254  343640 start.go:298] selected driver: docker
	I1208 18:10:07.567261  343640 start.go:902] validating driver "docker" against <nil>
	I1208 18:10:07.567387  343640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:07.626229  343640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-08 18:10:07.617637296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:07.626516  343640 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1208 18:10:07.627043  343640 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1208 18:10:07.627237  343640 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 18:10:07.629451  343640 out.go:169] Using Docker driver with root privileges
	I1208 18:10:07.630859  343640 cni.go:84] Creating CNI manager for ""
	I1208 18:10:07.630878  343640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:07.630890  343640 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 18:10:07.630898  343640 start_flags.go:323] config:
	{Name:download-only-892064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-892064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:07.632517  343640 out.go:97] Starting control plane node download-only-892064 in cluster download-only-892064
	I1208 18:10:07.632530  343640 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:10:07.633829  343640 out.go:97] Pulling base image ...
	I1208 18:10:07.633854  343640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1208 18:10:07.633969  343640 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:10:07.649236  343640 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 to local cache
	I1208 18:10:07.649455  343640 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory
	I1208 18:10:07.649551  343640 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 to local cache
	I1208 18:10:07.670509  343640 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:07.670537  343640 cache.go:56] Caching tarball of preloaded images
	I1208 18:10:07.670709  343640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1208 18:10:07.672961  343640 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1208 18:10:07.673005  343640 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:10:07.710675  343640 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:12.333297  343640 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:10:12.333400  343640 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:10:13.246983  343640 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1208 18:10:13.247328  343640 profile.go:148] Saving config to /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/download-only-892064/config.json ...
	I1208 18:10:13.247357  343640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/download-only-892064/config.json: {Name:mkfd647ab483df09862bfb5942cd2e2fb15965cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 18:10:13.247532  343640 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1208 18:10:13.247733  343640 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-892064"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
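The preload fetch above appends "?checksum=md5:432b600409d778ea7a21214e83948570" to the download URL and then verifies the saved tarball on disk (preload.go:238/256). A minimal sketch of that download-then-verify pattern, assuming nothing about minikube's internals beyond the md5 convention visible in the log; the URL and digest are copied from the lines above.

	// Sketch: download a preload tarball and verify its MD5 digest in one pass.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		// Stream to disk and into the hash at the same time.
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
		if err := downloadWithMD5(url, "preload.tar.lz4", "432b600409d778ea7a21214e83948570"); err != nil {
			log.Fatal(err)
		}
	}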

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-892064 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-892064 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.767567512s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-892064
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-892064: exit status 85 (76.712436ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-892064 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |          |
	|         | -p download-only-892064        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-892064 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |          |
	|         | -p download-only-892064        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:10:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:10:20.276729  343806 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:10:20.276903  343806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:20.276914  343806 out.go:309] Setting ErrFile to fd 2...
	I1208 18:10:20.276919  343806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:20.277119  343806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	W1208 18:10:20.277264  343806 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17738-336823/.minikube/config/config.json: open /home/jenkins/minikube-integration/17738-336823/.minikube/config/config.json: no such file or directory
	I1208 18:10:20.277765  343806 out.go:303] Setting JSON to true
	I1208 18:10:20.278699  343806 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6720,"bootTime":1702052300,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:10:20.278770  343806 start.go:138] virtualization: kvm guest
	I1208 18:10:20.281181  343806 out.go:97] [download-only-892064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:10:20.283035  343806 out.go:169] MINIKUBE_LOCATION=17738
	I1208 18:10:20.281422  343806 notify.go:220] Checking for updates...
	I1208 18:10:20.286247  343806 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:10:20.287770  343806 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:10:20.289283  343806 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:10:20.290899  343806 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 18:10:20.293651  343806 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 18:10:20.294152  343806 config.go:182] Loaded profile config "download-only-892064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1208 18:10:20.294234  343806 start.go:810] api.Load failed for download-only-892064: filestore "download-only-892064": Docker machine "download-only-892064" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 18:10:20.294317  343806 driver.go:392] Setting default libvirt URI to qemu:///system
	W1208 18:10:20.294351  343806 start.go:810] api.Load failed for download-only-892064: filestore "download-only-892064": Docker machine "download-only-892064" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 18:10:20.316806  343806 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:10:20.316910  343806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:20.367096  343806 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:20.358855015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:20.367186  343806 docker.go:295] overlay module found
	I1208 18:10:20.369368  343806 out.go:97] Using the docker driver based on existing profile
	I1208 18:10:20.369398  343806 start.go:298] selected driver: docker
	I1208 18:10:20.369404  343806 start.go:902] validating driver "docker" against &{Name:download-only-892064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-892064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:20.369556  343806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:20.420703  343806 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:20.41236773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:20.421316  343806 cni.go:84] Creating CNI manager for ""
	I1208 18:10:20.421334  343806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:20.421346  343806 start_flags.go:323] config:
	{Name:download-only-892064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-892064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:20.423287  343806 out.go:97] Starting control plane node download-only-892064 in cluster download-only-892064
	I1208 18:10:20.423322  343806 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:10:20.424698  343806 out.go:97] Pulling base image ...
	I1208 18:10:20.424721  343806 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:20.424835  343806 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:10:20.439905  343806 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 to local cache
	I1208 18:10:20.440045  343806 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory
	I1208 18:10:20.440065  343806 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory, skipping pull
	I1208 18:10:20.440072  343806 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in cache, skipping pull
	I1208 18:10:20.440083  343806 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 as a tarball
	I1208 18:10:20.456017  343806 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:20.456054  343806 cache.go:56] Caching tarball of preloaded images
	I1208 18:10:20.456187  343806 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1208 18:10:20.458022  343806 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1208 18:10:20.458050  343806 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:10:20.495085  343806 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:24.369096  343806 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:10:24.369223  343806 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-892064"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
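Unlike the first run, this one finds the kicbase image already present in the local cache directory and skips the pull (image.go:66/105 above). A hypothetical sketch of that check-before-download logic follows; the cache path is illustrative, not minikube's actual layout.

	// Sketch: stat the cached artifact and only download on a miss.
	package main

	import (
		"fmt"
		"os"
	)

	// cached reports whether path already exists; the caller pulls only on a miss.
	func cached(path string) (bool, error) {
		_, err := os.Stat(path)
		if err == nil {
			return true, nil // cache hit: skip the pull
		}
		if os.IsNotExist(err) {
			return false, nil // cache miss: pull and save
		}
		return false, err // unexpected stat error
	}

	func main() {
		// Illustrative path only.
		hit, err := cached(os.ExpandEnv("$HOME/.minikube/cache/kic/amd64/kicbase.tar"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if hit {
			fmt.Println("exists in cache, skipping pull")
		} else {
			fmt.Println("not cached; downloading")
		}
	}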

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/json-events (6.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-892064 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-892064 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.609105002s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (6.61s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-892064
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-892064: exit status 85 (77.026706ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-892064 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |          |
	|         | -p download-only-892064           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-892064 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |          |
	|         | -p download-only-892064           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-892064 | jenkins | v1.32.0 | 08 Dec 23 18:10 UTC |          |
	|         | -p download-only-892064           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/08 18:10:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 18:10:26.118889  343951 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:10:26.119144  343951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:26.119152  343951 out.go:309] Setting ErrFile to fd 2...
	I1208 18:10:26.119157  343951 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:10:26.119336  343951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	W1208 18:10:26.119439  343951 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17738-336823/.minikube/config/config.json: open /home/jenkins/minikube-integration/17738-336823/.minikube/config/config.json: no such file or directory
	I1208 18:10:26.119908  343951 out.go:303] Setting JSON to true
	I1208 18:10:26.120789  343951 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6726,"bootTime":1702052300,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:10:26.120858  343951 start.go:138] virtualization: kvm guest
	I1208 18:10:26.123131  343951 out.go:97] [download-only-892064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:10:26.123298  343951 notify.go:220] Checking for updates...
	I1208 18:10:26.124866  343951 out.go:169] MINIKUBE_LOCATION=17738
	I1208 18:10:26.126433  343951 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:10:26.128163  343951 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:10:26.129674  343951 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:10:26.131436  343951 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 18:10:26.134107  343951 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 18:10:26.134622  343951 config.go:182] Loaded profile config "download-only-892064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1208 18:10:26.134684  343951 start.go:810] api.Load failed for download-only-892064: filestore "download-only-892064": Docker machine "download-only-892064" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 18:10:26.134766  343951 driver.go:392] Setting default libvirt URI to qemu:///system
	W1208 18:10:26.134796  343951 start.go:810] api.Load failed for download-only-892064: filestore "download-only-892064": Docker machine "download-only-892064" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1208 18:10:26.155227  343951 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:10:26.155363  343951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:26.210061  343951 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:26.201927151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:26.210163  343951 docker.go:295] overlay module found
	I1208 18:10:26.212136  343951 out.go:97] Using the docker driver based on existing profile
	I1208 18:10:26.212176  343951 start.go:298] selected driver: docker
	I1208 18:10:26.212184  343951 start.go:902] validating driver "docker" against &{Name:download-only-892064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-892064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:26.212348  343951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:10:26.262988  343951 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-08 18:10:26.254603109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:10:26.263674  343951 cni.go:84] Creating CNI manager for ""
	I1208 18:10:26.263693  343951 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1208 18:10:26.263722  343951 start_flags.go:323] config:
	{Name:download-only-892064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-892064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:10:26.265596  343951 out.go:97] Starting control plane node download-only-892064 in cluster download-only-892064
	I1208 18:10:26.265622  343951 cache.go:121] Beginning downloading kic base image for docker with crio
	I1208 18:10:26.267067  343951 out.go:97] Pulling base image ...
	I1208 18:10:26.267100  343951 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1208 18:10:26.267234  343951 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local docker daemon
	I1208 18:10:26.283513  343951 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 to local cache
	I1208 18:10:26.283645  343951 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory
	I1208 18:10:26.283660  343951 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 in local cache directory, skipping pull
	I1208 18:10:26.283664  343951 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 exists in cache, skipping pull
	I1208 18:10:26.283672  343951 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 as a tarball
	I1208 18:10:26.313136  343951 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1208 18:10:26.313165  343951 cache.go:56] Caching tarball of preloaded images
	I1208 18:10:26.313315  343951 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1208 18:10:26.315116  343951 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1208 18:10:26.315135  343951 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1208 18:10:26.345077  343951 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:26a42be529125e55182ed93a618b213b -> /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-892064"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)
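
The preload fetch above is checksum-guarded: the download URL carries a ?checksum=md5:... query parameter, and the log shows the checksum being fetched before the download starts. A minimal sketch of re-checking an already-cached preload by hand, assuming the cache path from the log and md5sum on the host:

	# digest of the cached preload tarball (path taken from the download line above)
	md5sum /home/jenkins/minikube-integration/17738-336823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	# expected value, from the URL's checksum parameter: 26a42be529125e55182ed93a618b213b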

TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-892064
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.3s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-819225 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-819225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-819225
--- PASS: TestDownloadOnlyKic (1.30s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-908328 --alsologtostderr --binary-mirror http://127.0.0.1:44187 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-908328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-908328
--- PASS: TestBinaryMirror (0.73s)

TestOffline (83.18s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-858514 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-858514 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m20.735381982s)
helpers_test.go:175: Cleaning up "offline-crio-858514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-858514
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-858514: (2.449154566s)
--- PASS: TestOffline (83.18s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-766826
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-766826: exit status 85 (65.351453ms)

-- stdout --
	* Profile "addons-766826" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-766826"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-766826
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-766826: exit status 85 (66.790244ms)

-- stdout --
	* Profile "addons-766826" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-766826"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (151.59s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-766826 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-766826 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.592721575s)
--- PASS: TestAddons/Setup (151.59s)
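
The addons here are enabled via start flags, but they can equally be toggled on the running cluster; a minimal sketch, with addon names taken from the start command above:

	out/minikube-linux-amd64 addons enable metrics-server -p addons-766826
	out/minikube-linux-amd64 addons disable metrics-server -p addons-766826
	out/minikube-linux-amd64 addons list -p addons-766826    # show per-addon state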

TestAddons/parallel/Registry (14.11s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 13.254109ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-n29ff" [51d60be4-1fcd-4243-a9f5-b01f0c18e985] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012652478s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pg8rp" [831df691-d6e7-47e4-81c5-ec68788fcdb4] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.056388204s
addons_test.go:339: (dbg) Run:  kubectl --context addons-766826 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-766826 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-766826 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.142328035s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.11s)
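
The registry check above is just an in-cluster HTTP probe; a minimal sketch of re-running it by hand, reusing the busybox image and service DNS name from the log (the pod name registry-probe is arbitrary):

	# --spider checks reachability without downloading the body; -S prints response headers
	kubectl --context addons-766826 run registry-probe --rm --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"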

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.998517ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zrxqf" [96be6ea9-f7ed-447e-96f0-2de2852c5689] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014388882s
addons_test.go:414: (dbg) Run:  kubectl --context addons-766826 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/HelmTiller (9.81s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 62.973975ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-lf6zk" [bb5789f0-c460-44b1-8cef-9b34b3892cf5] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012316825s
addons_test.go:472: (dbg) Run:  kubectl --context addons-766826 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-766826 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.22240733s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.81s)

TestAddons/parallel/CSI (39.94s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 14.436361ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-766826 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-766826 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d046ecd-60ea-4e24-b441-8486e060b8f0] Pending
helpers_test.go:344: "task-pv-pod" [3d046ecd-60ea-4e24-b441-8486e060b8f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d046ecd-60ea-4e24-b441-8486e060b8f0] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.010789078s
addons_test.go:583: (dbg) Run:  kubectl --context addons-766826 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-766826 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-766826 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-766826 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-766826 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-766826 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-766826 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-766826 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cb137d24-d069-403e-b70b-de65661121e5] Pending
helpers_test.go:344: "task-pv-pod-restore" [cb137d24-d069-403e-b70b-de65661121e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cb137d24-d069-403e-b70b-de65661121e5] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.012470487s
addons_test.go:625: (dbg) Run:  kubectl --context addons-766826 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-766826 delete pod task-pv-pod-restore: (1.156673539s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-766826 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-766826 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-766826 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.597057947s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.94s)
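
The restore above uses the standard CSI VolumeSnapshot dataSource mechanism. A minimal sketch of the same restore step, assuming the snapshot new-snapshot-demo still exists and that the addon's StorageClass is named csi-hostpath-sc (the PVC name hpvc-restore-demo is hypothetical):

	# clone a new PVC from an existing VolumeSnapshot
	kubectl --context addons-766826 apply -f - <<-EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore-demo
	spec:
	  storageClassName: csi-hostpath-sc
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF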

TestAddons/parallel/Headlamp (12.07s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-766826 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-766826 --alsologtostderr -v=1: (1.032027693s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-xwtlg" [76162a62-6a1b-4542-86b4-b66fed2c5232] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-xwtlg" [76162a62-6a1b-4542-86b4-b66fed2c5232] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.040899686s
--- PASS: TestAddons/parallel/Headlamp (12.07s)

TestAddons/parallel/CloudSpanner (5.92s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-7j9q4" [6ee71215-c72e-4f5b-a1f4-2b2945ecbc23] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009253114s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-766826
--- PASS: TestAddons/parallel/CloudSpanner (5.92s)

TestAddons/parallel/LocalPath (56.75s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-766826 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-766826 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c2ba0104-93c4-4926-a2a6-cb067f8f8fbd] Pending
helpers_test.go:344: "test-local-path" [c2ba0104-93c4-4926-a2a6-cb067f8f8fbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c2ba0104-93c4-4926-a2a6-cb067f8f8fbd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c2ba0104-93c4-4926-a2a6-cb067f8f8fbd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.0097049s
addons_test.go:890: (dbg) Run:  kubectl --context addons-766826 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 ssh "cat /opt/local-path-provisioner/pvc-de77890f-3fa6-42c6-805e-20b83a22f899_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-766826 delete pod test-local-path
2023/12/08 18:13:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:915: (dbg) Run:  kubectl --context addons-766826 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-766826 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-766826 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.100263353s)
--- PASS: TestAddons/parallel/LocalPath (56.75s)
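
Note the host path in the cat command above: the provisioner lays volumes out as /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>/ on the node, which is why the test can read file1 over ssh. A minimal sketch of inspecting that directory directly:

	# list provisioned local-path volumes on the node
	out/minikube-linux-amd64 -p addons-766826 ssh "ls /opt/local-path-provisioner/"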

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2vjv7" [fbd353d3-71e8-4b51-9170-9716493afe0b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010782885s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-766826
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-766826 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-766826 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-766826
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-766826: (11.918885375s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-766826
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-766826
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-766826
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

TestCertOptions (26.01s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-242960 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-242960 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.381864405s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-242960 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-242960 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-242960 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-242960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-242960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-242960: (2.020956063s)
--- PASS: TestCertOptions (26.01s)
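
The certificate assertions here reduce to reading the apiserver cert's SAN list; a minimal sketch, reusing the in-VM path from the openssl command above (GNU grep assumed for -A1):

	# the extra --apiserver-ips / --apiserver-names values should appear in this block
	out/minikube-linux-amd64 -p cert-options-242960 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"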

TestCertExpiration (230.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-894017 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1208 18:43:06.943730  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:43:18.044754  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-894017 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.322945067s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-894017 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-894017 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.555144655s)
helpers_test.go:175: Cleaning up "cert-expiration-894017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-894017
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-894017: (2.223388355s)
--- PASS: TestCertExpiration (230.10s)
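
A minimal sketch of verifying what --cert-expiration produced, assuming the same in-VM certificate path that TestCertOptions reads above:

	# notAfter should land ~3m (then ~8760h) after the corresponding start
	out/minikube-linux-amd64 -p cert-expiration-894017 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"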

TestForceSystemdFlag (30.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-416219 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1208 18:44:47.252848  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-416219 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.695785912s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-416219 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-416219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-416219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-416219: (2.320939199s)
--- PASS: TestForceSystemdFlag (30.29s)
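
The flag's effect is visible in the CRI-O drop-in that the test cats; a minimal sketch of checking the cgroup manager directly (the key name cgroup_manager is an assumption about the CRI-O config schema):

	# expect cgroup_manager = "systemd" when --force-systemd is set
	out/minikube-linux-amd64 -p force-systemd-flag-416219 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager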

TestForceSystemdEnv (29.72s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-742678 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-742678 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.152295013s)
helpers_test.go:175: Cleaning up "force-systemd-env-742678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-742678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-742678: (2.567129968s)
--- PASS: TestForceSystemdEnv (29.72s)

TestKVMDriverInstallOrUpdate (3.01s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.01s)

TestErrorSpam/setup (24.74s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-069042 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-069042 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-069042 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-069042 --driver=docker  --container-runtime=crio: (24.743251306s)
--- PASS: TestErrorSpam/setup (24.74s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 stop: (1.213021312s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-069042 --log_dir /tmp/nospam-069042 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17738-336823/.minikube/files/etc/test/nested/copy/343628/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290514 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1208 18:18:06.943913  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:06.949751  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:06.960499  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:06.980768  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:07.021045  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:07.101379  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:07.261796  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:07.582360  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:08.223356  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:09.503880  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:12.065638  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:17.186087  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-290514 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.422824931s)
--- PASS: TestFunctional/serial/StartWithProxy (69.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290514 --alsologtostderr -v=8
E1208 18:18:27.426957  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:18:47.908179  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-290514 --alsologtostderr -v=8: (32.596665017s)
functional_test.go:659: soft start took 32.59783524s for "functional-290514" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.60s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-290514 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 cache add registry.k8s.io/pause:3.3: (1.023889611s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-290514 /tmp/TestFunctionalserialCacheCmdcacheadd_local487017238/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cache add minikube-local-cache-test:functional-290514
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cache delete minikube-local-cache-test:functional-290514
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-290514
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.722497ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
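
The reload cycle can be replayed by hand; a minimal sketch mirroring the commands in the log (crictl inspecti exits non-zero while the image is absent, as the stdout above shows):

	out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-290514 cache reload    # re-push cached images to the node
	out/minikube-linux-amd64 -p functional-290514 ssh sudo crictl inspecti registry.k8s.io/pause:latest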

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 kubectl -- --context functional-290514 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-290514 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290514 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1208 18:19:28.869831  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-290514 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.786077285s)
functional_test.go:757: restart took 38.786237167s for "functional-290514" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.79s)
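
The --extra-config value format is <component>.<flag>=<value>, and the flag can be repeated for multiple settings; the restart above passes one such pair:

	# re-run the same cluster with an extra apiserver admission plugin enabled
	out/minikube-linux-amd64 start -p functional-290514 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all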

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-290514 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 logs: (1.364369935s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 logs --file /tmp/TestFunctionalserialLogsFileCmd1099860446/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 logs --file /tmp/TestFunctionalserialLogsFileCmd1099860446/001/logs.txt: (1.370101859s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (4.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-290514 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-290514
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-290514: exit status 115 (332.262475ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32058 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-290514 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-290514 delete -f testdata/invalidsvc.yaml: (1.050047283s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 config get cpus: exit status 14 (83.634928ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 config get cpus: exit status 14 (80.370868ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
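
Exit status 14 is how config get reports a missing key in this log; a minimal sketch of branching on it in a script (the exit-code meaning is inferred from the output above, not from documented behavior):

	if out/minikube-linux-amd64 -p functional-290514 config get cpus >/dev/null 2>&1; then
	  echo "cpus is set in the profile config"
	else
	  echo "cpus is unset"    # config get exited non-zero (14 above when the key is absent)
	fi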

TestFunctional/parallel/DashboardCmd (14.68s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-290514 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-290514 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 381879: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.68s)
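
Note: the "unable to kill pid" message is benign teardown noise; the dashboard proxy had already exited when the harness tried to stop it. A rough shell equivalent of what the test drives (PID handling illustrative):
	minikube dashboard --url --port 36195 -p functional-290514 &
	PID=$!
	# ... read the printed URL and probe it ...
	kill "$PID" 2>/dev/null || true    # tolerate a process that already finished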

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-290514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (177.75903ms)

-- stdout --
	* [functional-290514] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1208 18:20:09.103123  380708 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:20:09.103256  380708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:20:09.103265  380708 out.go:309] Setting ErrFile to fd 2...
	I1208 18:20:09.103269  380708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:20:09.103451  380708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:20:09.104002  380708 out.go:303] Setting JSON to false
	I1208 18:20:09.105055  380708 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7309,"bootTime":1702052300,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:20:09.105122  380708 start.go:138] virtualization: kvm guest
	I1208 18:20:09.107661  380708 out.go:177] * [functional-290514] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:20:09.109386  380708 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:20:09.110820  380708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:20:09.109462  380708 notify.go:220] Checking for updates...
	I1208 18:20:09.113615  380708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:20:09.115135  380708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:20:09.116439  380708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:20:09.117784  380708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:20:09.120418  380708 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:20:09.121152  380708 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:20:09.155736  380708 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:20:09.155839  380708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:20:09.208849  380708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-08 18:20:09.200787648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:20:09.208948  380708 docker.go:295] overlay module found
	I1208 18:20:09.210899  380708 out.go:177] * Using the docker driver based on existing profile
	I1208 18:20:09.212258  380708 start.go:298] selected driver: docker
	I1208 18:20:09.212278  380708 start.go:902] validating driver "docker" against &{Name:functional-290514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-290514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:20:09.212384  380708 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:20:09.214643  380708 out.go:177] 
	W1208 18:20:09.216173  380708 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 18:20:09.217645  380708 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290514 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
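
Note: --dry-run validates the requested configuration without creating or mutating anything. The first invocation exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB usable minimum; the second, which omits --memory, validates the existing profile cleanly. A sketch:
	minikube start -p functional-290514 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo $?   # 23
	minikube start -p functional-290514 --dry-run --driver=docker --container-runtime=crio; echo $?                  # 0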

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-290514 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (270.587336ms)

-- stdout --
	* [functional-290514] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1208 18:20:09.605615  380889 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:20:09.605844  380889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:20:09.605857  380889 out.go:309] Setting ErrFile to fd 2...
	I1208 18:20:09.605864  380889 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:20:09.606225  380889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:20:09.606843  380889 out.go:303] Setting JSON to false
	I1208 18:20:09.608062  380889 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7310,"bootTime":1702052300,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:20:09.608168  380889 start.go:138] virtualization: kvm guest
	I1208 18:20:09.610988  380889 out.go:177] * [functional-290514] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1208 18:20:09.612711  380889 notify.go:220] Checking for updates...
	I1208 18:20:09.614056  380889 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:20:09.615475  380889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:20:09.616861  380889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:20:09.618162  380889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:20:09.619946  380889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:20:09.621598  380889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:20:09.623667  380889 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:20:09.624406  380889 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:20:09.668272  380889 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:20:09.668404  380889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:20:09.770698  380889 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:51 SystemTime:2023-12-08 18:20:09.756443181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:20:09.770836  380889 docker.go:295] overlay module found
	I1208 18:20:09.772956  380889 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1208 18:20:09.774348  380889 start.go:298] selected driver: docker
	I1208 18:20:09.774368  380889 start.go:902] validating driver "docker" against &{Name:functional-290514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701996201-17738@sha256:762cf4043ae4a952648fa2b64c30a2e88a4a1a052facb1120d8e17b35444edf0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-290514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1208 18:20:09.774523  380889 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:20:09.777157  380889 out.go:177] 
	W1208 18:20:09.779700  380889 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 18:20:09.781171  380889 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)
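
Note: this is the same failing dry-run as above, re-run under a French locale to verify the output is localized. A sketch, assuming minikube reads the language from the standard locale environment variables (e.g. LC_ALL=fr):
	LC_ALL=fr minikube start -p functional-290514 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ...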

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)
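
Note: minikube status accepts a Go template via -f and JSON via -o. The template fields ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) come from the status struct; the labels around them are free-form text, which is why the test's "kublet" spelling above is harmless. For example:
	minikube -p functional-290514 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	minikube -p functional-290514 status -o json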

TestFunctional/parallel/ServiceCmdConnect (9.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-290514 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-290514 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-ttdxr" [23247609-87cf-4ab5-bc9c-13ee4e992b44] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-ttdxr" [23247609-87cf-4ab5-bc9c-13ee4e992b44] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.078195144s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30975
functional_test.go:1674: http://192.168.49.2:30975: success! body:

Hostname: hello-node-connect-55497b8b78-ttdxr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30975
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.24s)
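
Note: the flow above is a standard NodePort round-trip; minikube service --url resolves the node IP plus the allocated port (here 192.168.49.2:30975), which the echoserver response then reflects back. A sketch:
	kubectl --context functional-290514 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-290514 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-290514 service hello-node-connect --url)
	curl -s "$URL"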

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (34.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [38e95964-0d37-4aca-95d8-936a54abdfb9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01773451s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-290514 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-290514 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-290514 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-290514 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [33a6137d-fd3b-4fcf-a051-69c24d2ed723] Pending
helpers_test.go:344: "sp-pod" [33a6137d-fd3b-4fcf-a051-69c24d2ed723] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [33a6137d-fd3b-4fcf-a051-69c24d2ed723] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.011618227s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-290514 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-290514 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-290514 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6541aec5-4b34-4304-8f94-55809b57691a] Pending
helpers_test.go:344: "sp-pod" [6541aec5-4b34-4304-8f94-55809b57691a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6541aec5-4b34-4304-8f94-55809b57691a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.086687025s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-290514 exec sp-pod -- ls /tmp/mount
2023/12/08 18:20:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.87s)
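
Note: the test checks durability, not just binding: it writes /tmp/mount/foo from the first sp-pod, deletes that pod, recreates it against the same claim, and lists the file again. The kubectl skeleton, using the repo's testdata manifests:
	kubectl --context functional-290514 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-290514 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-290514 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-290514 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-290514 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-290514 exec sp-pod -- ls /tmp/mount    # foo survives the recreation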

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh -n functional-290514 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 cp functional-290514:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1853942338/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh -n functional-290514 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)
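
Note: minikube cp copies in both directions; a node-side path is addressed as <profile>:<path>. A sketch (local destination name illustrative):
	minikube -p functional-290514 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
	minikube -p functional-290514 cp functional-290514:/home/docker/cp-test.txt ./cp-test.txt # node -> host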

TestFunctional/parallel/MySQL (21.23s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-290514 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-9q5qj" [175d3bc2-43d6-413e-964f-bb2137876595] Pending
helpers_test.go:344: "mysql-859648c796-9q5qj" [175d3bc2-43d6-413e-964f-bb2137876595] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-9q5qj" [175d3bc2-43d6-413e-964f-bb2137876595] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.009564775s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-290514 exec mysql-859648c796-9q5qj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-290514 exec mysql-859648c796-9q5qj -- mysql -ppassword -e "show databases;": exit status 1 (141.288196ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-290514 exec mysql-859648c796-9q5qj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-290514 exec mysql-859648c796-9q5qj -- mysql -ppassword -e "show databases;": exit status 1 (134.801558ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-290514 exec mysql-859648c796-9q5qj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.23s)
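
Note: a Running pod does not mean mysqld is accepting connections yet; ERROR 2002 only says the server socket is not up, so the test keeps retrying until the query succeeds. A minimal retry sketch (deployment name from the manifest above):
	until kubectl --context functional-290514 exec deploy/mysql -- mysql -ppassword -e 'show databases;'; do
		sleep 2    # mysqld may still be initializing after the pod reports Running
	done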

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/343628/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /etc/test/nested/copy/343628/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
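
Note: file sync copies anything staged under $MINIKUBE_HOME/files/<path> to <path> inside the node at start time; the 343628 path component is unique to this test run. A sketch, assuming the file is staged before the cluster is (re)started:
	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/343628"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/343628/hosts"
	minikube -p functional-290514 ssh "sudo cat /etc/test/nested/copy/343628/hosts"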

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/343628.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /etc/ssl/certs/343628.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/343628.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /usr/share/ca-certificates/343628.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3436282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /etc/ssl/certs/3436282.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3436282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /usr/share/ca-certificates/3436282.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
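
Note: the three paths checked per certificate are the same file in different guises: the synced PEM under /etc/ssl/certs and /usr/share/ca-certificates, plus the OpenSSL subject-hash alias (51391683.0, 3ec20f2e.0) that the system trust store resolves. Assuming the certs were staged under $MINIKUBE_HOME/certs before start, the hash names can be reproduced locally:
	openssl x509 -noout -hash -in 343628.pem    # prints 51391683, the basename of /etc/ssl/certs/51391683.0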

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-290514 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
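
Note: the Go template ranges over the first node's label map and prints only the keys, space-separated. Equivalent standalone command:
	kubectl --context functional-290514 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# e.g.: kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os minikube.k8s.io/name ...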

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh "sudo systemctl is-active docker": exit status 1 (343.853088ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh "sudo systemctl is-active containerd": exit status 1 (347.92781ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
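
Note: systemctl is-active prints the unit state and exits non-zero for anything but "active" (3 for "inactive" here, surfaced as exit status 1 by the minikube ssh wrapper). With cri-o as the configured runtime, docker and containerd must both be inactive. A sketch (crio service name assumed):
	minikube -p functional-290514 ssh "sudo systemctl is-active crio"      # active, exit 0
	minikube -p functional-290514 ssh "sudo systemctl is-active docker"    # inactive, remote exit 3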

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290514 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-290514
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290514 image ls --format short --alsologtostderr:
I1208 18:20:15.541777  382636 out.go:296] Setting OutFile to fd 1 ...
I1208 18:20:15.542114  382636 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:15.542126  382636 out.go:309] Setting ErrFile to fd 2...
I1208 18:20:15.542134  382636 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:15.542475  382636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
I1208 18:20:15.543359  382636 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:15.543552  382636 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:15.544120  382636 cli_runner.go:164] Run: docker container inspect functional-290514 --format={{.State.Status}}
I1208 18:20:15.564540  382636 ssh_runner.go:195] Run: systemctl --version
I1208 18:20:15.564602  382636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290514
I1208 18:20:15.583906  382636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/functional-290514/id_rsa Username:docker}
I1208 18:20:15.723174  382636 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
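
Note: minikube image ls supports several output formats; short (above) prints one repo:tag per line, while the table, json and yaml variants that follow carry image IDs, digests and sizes for the same set. For example:
	minikube -p functional-290514 image ls --format short
	minikube -p functional-290514 image ls --format table
	minikube -p functional-290514 image ls --format json
	minikube -p functional-290514 image ls --format yaml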

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290514 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/google-containers/addon-resizer  | functional-290514  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-290514  | bfc3f25c471d0 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| docker.io/library/nginx                 | alpine             | 01e5c69afaf63 | 44.4MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290514 image ls --format table --alsologtostderr:
I1208 18:20:18.937372  383404 out.go:296] Setting OutFile to fd 1 ...
I1208 18:20:18.937550  383404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:18.937564  383404 out.go:309] Setting ErrFile to fd 2...
I1208 18:20:18.937573  383404 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:18.937934  383404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
I1208 18:20:18.938881  383404 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:18.939053  383404 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:18.939742  383404 cli_runner.go:164] Run: docker container inspect functional-290514 --format={{.State.Status}}
I1208 18:20:18.956248  383404 ssh_runner.go:195] Run: systemctl --version
I1208 18:20:18.956317  383404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290514
I1208 18:20:18.976511  383404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/functional-290514/id_rsa Username:docker}
I1208 18:20:19.066912  383404 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290514 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"bfc3f25c471d0d9ca2d21f8916b0d1afd7fca0aa1f96f92c0130e96f08296577","repoDigests":["localhost/my-image@sha256:2e8152f111c67b6ea1f66d1de04c0febe7cba82f67835ba3aae0473cb871d0ea"],"repoTags":["localhost/my-image:functional-290514"],"size":"1468194"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":["docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc","docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44421929"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-290514"],"size":"34114467"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"433dc8da9992479b76f1879730f44e14980f446cb8b3341d54238b623eb6aaa8","repoDigests":["docker.io/library/646c29dff168ca52382d2993a70bf50fb5fd53029beed1d025efee79d9b6bcd3-tmp@sha256:733a692f53ef428f57b222e2b9682ef710bd624c4baf98ee4a08c3640c301f01"],"repoTags":[],"size":"1465612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290514 image ls --format json --alsologtostderr:
I1208 18:20:18.702116  383358 out.go:296] Setting OutFile to fd 1 ...
I1208 18:20:18.702232  383358 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:18.702240  383358 out.go:309] Setting ErrFile to fd 2...
I1208 18:20:18.702245  383358 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:18.702467  383358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
I1208 18:20:18.703072  383358 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:18.703173  383358 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:18.703631  383358 cli_runner.go:164] Run: docker container inspect functional-290514 --format={{.State.Status}}
I1208 18:20:18.722126  383358 ssh_runner.go:195] Run: systemctl --version
I1208 18:20:18.722192  383358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290514
I1208 18:20:18.738964  383358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/functional-290514/id_rsa Username:docker}
I1208 18:20:18.823645  383358 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
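Note: the JSON list format shown above is straightforward to consume programmatically. A minimal Go sketch (the struct fields mirror the id/repoDigests/repoTags/size keys visible in the output; the binary path and profile name are the ones used in this run):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors one entry printed by `minikube image ls --format json`.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, emitted as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-290514",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags, img.Size)
	}
}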

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290514 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests:
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
- docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc
repoTags:
- docker.io/library/nginx:alpine
size: "44421929"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-290514
size: "34114467"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290514 image ls --format yaml --alsologtostderr:
I1208 18:20:15.832830  382679 out.go:296] Setting OutFile to fd 1 ...
I1208 18:20:15.833114  382679 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:15.833123  382679 out.go:309] Setting ErrFile to fd 2...
I1208 18:20:15.833128  382679 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:15.833336  382679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
I1208 18:20:15.833912  382679 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:15.834024  382679 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:15.834481  382679 cli_runner.go:164] Run: docker container inspect functional-290514 --format={{.State.Status}}
I1208 18:20:15.856742  382679 ssh_runner.go:195] Run: systemctl --version
I1208 18:20:15.856796  382679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290514
I1208 18:20:15.877482  382679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/functional-290514/id_rsa Username:docker}
I1208 18:20:16.019308  382679 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
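Note: the YAML variant carries the same fields as the JSON one, so decoding differs only in the unmarshaler. A sketch, assuming gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// image mirrors one entry printed by `minikube image ls --format yaml`.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-290514",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	fmt.Printf("%d images\n", len(images))
}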

TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh pgrep buildkitd: exit status 1 (363.643663ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image build -t localhost/my-image:functional-290514 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 image build -t localhost/my-image:functional-290514 testdata/build --alsologtostderr: (1.972033259s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290514 image build -t localhost/my-image:functional-290514 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 433dc8da999
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-290514
--> bfc3f25c471
Successfully tagged localhost/my-image:functional-290514
bfc3f25c471d0d9ca2d21f8916b0d1afd7fca0aa1f96f92c0130e96f08296577
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290514 image build -t localhost/my-image:functional-290514 testdata/build --alsologtostderr:
I1208 18:20:16.495408  382825 out.go:296] Setting OutFile to fd 1 ...
I1208 18:20:16.495695  382825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:16.495705  382825 out.go:309] Setting ErrFile to fd 2...
I1208 18:20:16.495713  382825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1208 18:20:16.495940  382825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
I1208 18:20:16.496600  382825 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:16.497205  382825 config.go:182] Loaded profile config "functional-290514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1208 18:20:16.497664  382825 cli_runner.go:164] Run: docker container inspect functional-290514 --format={{.State.Status}}
I1208 18:20:16.513772  382825 ssh_runner.go:195] Run: systemctl --version
I1208 18:20:16.513817  382825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290514
I1208 18:20:16.537071  382825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/functional-290514/id_rsa Username:docker}
I1208 18:20:16.723457  382825 build_images.go:151] Building image from path: /tmp/build.2966194744.tar
I1208 18:20:16.723531  382825 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 18:20:16.732558  382825 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2966194744.tar
I1208 18:20:16.736404  382825 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2966194744.tar: stat -c "%s %y" /var/lib/minikube/build/build.2966194744.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2966194744.tar': No such file or directory
I1208 18:20:16.736447  382825 ssh_runner.go:362] scp /tmp/build.2966194744.tar --> /var/lib/minikube/build/build.2966194744.tar (3072 bytes)
I1208 18:20:16.820696  382825 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2966194744
I1208 18:20:16.831904  382825 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2966194744 -xf /var/lib/minikube/build/build.2966194744.tar
I1208 18:20:16.841298  382825 crio.go:297] Building image: /var/lib/minikube/build/build.2966194744
I1208 18:20:16.841374  382825 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-290514 /var/lib/minikube/build/build.2966194744 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1208 18:20:18.380401  382825 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-290514 /var/lib/minikube/build/build.2966194744 --cgroup-manager=cgroupfs: (1.538996314s)
I1208 18:20:18.380466  382825 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2966194744
I1208 18:20:18.388820  382825 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2966194744.tar
I1208 18:20:18.396913  382825 build_images.go:207] Built localhost/my-image:functional-290514 from /tmp/build.2966194744.tar
I1208 18:20:18.396947  382825 build_images.go:123] succeeded building to: functional-290514
I1208 18:20:18.396951  382825 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)
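Note: the three logged STEPs imply a three-line Containerfile in testdata/build. A Go sketch that reproduces the build with a synthetic context (the temp-dir layout and content.txt payload are assumptions; the minikube invocation matches functional_test.go:314 above):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build-ctx")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	// Reconstructed from the STEP 1/3..3/3 lines above.
	containerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(containerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		panic(err)
	}
	// minikube tars the context, copies it into the node, and runs podman build
	// there, as the build_images.go/crio.go log lines above show.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-290514",
		"image", "build", "-t", "localhost/my-image:functional-290514", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}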

TestFunctional/parallel/ImageCommands/Setup (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-290514
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-290514 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-290514 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-ptsqm" [086adf3d-8b28-4f25-b95f-2446c0b9da41] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-ptsqm" [086adf3d-8b28-4f25-b95f-2446c0b9da41] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.017174674s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image load --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 image load --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr: (4.092228395s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-290514 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-290514 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-290514 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 376858: os: process already finished
helpers_test.go:502: unable to terminate pid 376572: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-290514 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-290514 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-290514 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d1c037ac-957f-45af-935e-a57b69429446] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d1c037ac-957f-45af-935e-a57b69429446] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.071979421s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.37s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image load --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 image load --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr: (2.902649335s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.12s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-290514
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image load --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-290514 image load --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr: (5.51872135s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.46s)

TestFunctional/parallel/ServiceCmd/List (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 service list -o json
functional_test.go:1493: Took "326.813298ms" to run "out/minikube-linux-amd64 -p functional-290514 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31656
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-290514 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
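Note: the tunnel becomes usable once the LoadBalancer service reports an ingress IP, which the test reads with the jsonpath query above. A Go polling sketch around that same kubectl invocation (the 30-attempt budget and 2s interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-290514",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is `minikube tunnel` running?")
}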

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.22.46 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-290514 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31656
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image save gcr.io/google-containers/addon-resizer:functional-290514 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.90s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "319.423525ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "63.624549ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "316.378334ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "66.450793ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image rm gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/MountCmd/any-port (7.07s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdany-port3617966594/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702059602493087460" to /tmp/TestFunctionalparallelMountCmdany-port3617966594/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702059602493087460" to /tmp/TestFunctionalparallelMountCmdany-port3617966594/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702059602493087460" to /tmp/TestFunctionalparallelMountCmdany-port3617966594/001/test-1702059602493087460
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.910604ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 18:20 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 18:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 18:20 test-1702059602493087460
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh cat /mount-9p/test-1702059602493087460
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-290514 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b23ce8f5-1d15-42aa-8175-8a45080855bf] Pending
helpers_test.go:344: "busybox-mount" [b23ce8f5-1d15-42aa-8175-8a45080855bf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b23ce8f5-1d15-42aa-8175-8a45080855bf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b23ce8f5-1d15-42aa-8175-8a45080855bf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.069875412s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-290514 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdany-port3617966594/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.07s)
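Note: mount verification reduces to checking for a 9p filesystem at the mount point from inside the guest, exactly as the findmnt calls above do. A Go sketch of the same probe (binary path and profile as in this run; as the log shows, the first probe can race the mount daemon and fail once before succeeding):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-290514",
		"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
	if err != nil {
		fmt.Println("mount not visible yet:", err)
		return
	}
	fmt.Print(string(out))
}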

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)
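Note: ImageSaveToFile and ImageLoadFromFile together form a tar round trip. A Go sketch chaining the two commands shown above (the tar path is the one from this run):

package main

import (
	"os"
	"os/exec"
)

// run invokes the minikube binary under test and streams its output.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar"
	run("-p", "functional-290514", "image", "save",
		"gcr.io/google-containers/addon-resizer:functional-290514", tar)
	run("-p", "functional-290514", "image", "load", tar)
}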

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-290514
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 image save --daemon gcr.io/google-containers/addon-resizer:functional-290514 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-290514
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.91s)

TestFunctional/parallel/MountCmd/specific-port (2.58s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdspecific-port2522584406/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.201746ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdspecific-port2522584406/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh "sudo umount -f /mount-9p": exit status 1 (385.067391ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-290514 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdspecific-port2522584406/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.58s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T" /mount1: exit status 1 (491.252991ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290514 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-290514 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2944955075/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-290514
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-290514
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-290514
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (72.15s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-722179 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1208 18:20:50.790132  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-722179 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m12.152047685s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (72.15s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.41s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons enable ingress --alsologtostderr -v=5: (10.409859919s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.41s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-722179 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

TestJSONOutput/start/Command (69.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-739595 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1208 18:24:57.493813  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:25:07.734183  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:25:28.215322  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-739595 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m9.08375201s)
--- PASS: TestJSONOutput/start/Command (69.08s)
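Note: with --output=json, every line on stdout is a CloudEvent like the ones captured under TestErrorJSONOutput at the end of this section. A Go decoding sketch for such a stream (field names follow the specversion/id/source/type/data keys visible there):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents emitted by `minikube start --output=json`.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe minikube's stdout in here
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}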

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-739595 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-739595 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-739595 --output=json --user=testUser
E1208 18:26:09.176570  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-739595 --output=json --user=testUser: (5.709550387s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-500443 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-500443 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.247304ms)
-- stdout --
	{"specversion":"1.0","id":"a25a6e76-14e8-4adc-ad29-ecb95353c7ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-500443] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b799192-4ea1-4fd3-bba8-9003ea410da4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17738"}}
	{"specversion":"1.0","id":"eb676bda-9d26-42cc-a27b-bb8e2444897d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"685187db-6a19-432a-9887-4b9c9bdd7d38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig"}}
	{"specversion":"1.0","id":"1efa9357-2f9f-44d4-964f-4cf1abc2bc6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube"}}
	{"specversion":"1.0","id":"5f5f5393-6243-477b-8b7f-7c685a583eec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3bffc9cc-7d6f-4443-9b53-a936a72012de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a664fec-b08b-42ac-aa3e-754ea103301c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-500443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-500443
--- PASS: TestErrorJSONOutput (0.23s)
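The --output=json runs above emit one CloudEvents-style JSON object per line on stdout. A minimal Go sketch (not part of the test suite; field and event-type names are taken verbatim from the output above) for decoding that stream:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors only the fields visible in the log above;
// encoding/json ignores anything else in the payload.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// Error events carry name/exitcode/message, e.g. DRV_UNSUPPORTED_OS above.
			fmt.Printf("error %s (exitcode %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Println(ev.Data["message"])
	}
}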

TestKicCustomNetwork/create_custom_network (32.91s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-742720 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-742720 --network=: (30.840467483s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-742720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-742720
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-742720: (2.050433208s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.91s)

TestKicCustomNetwork/use_default_bridge_network (27.51s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-526106 --network=bridge
E1208 18:26:54.997695  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.003731  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.014266  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.035422  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.075709  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.156114  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.316545  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:55.636910  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:56.277840  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:26:57.558862  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:27:00.119709  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
E1208 18:27:05.240394  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-526106 --network=bridge: (25.50475034s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-526106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-526106
E1208 18:27:15.480833  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-526106: (1.984608001s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.51s)

TestKicExistingNetwork (26.94s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-672200 --network=existing-network
E1208 18:27:31.097396  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
E1208 18:27:35.961083  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-672200 --network=existing-network: (24.835805198s)
helpers_test.go:175: Cleaning up "existing-network-672200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-672200
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-672200: (1.968056214s)
--- PASS: TestKicExistingNetwork (26.94s)

TestKicCustomSubnet (27.36s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-411236 --subnet=192.168.60.0/24
E1208 18:28:06.943733  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-411236 --subnet=192.168.60.0/24: (25.25531583s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-411236 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-411236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-411236
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-411236: (2.081850521s)
--- PASS: TestKicCustomSubnet (27.36s)
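The subnet assertion above hinges on the docker network inspect Go template shown in the log. A standalone sketch of the same check, assuming (as in this run) that the kic network carries the profile name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Template copied from the kic_custom_network_test.go:161 invocation above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-411236",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("unexpected subnet: %s\n", got)
	}
}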

TestKicStaticIP (28.01s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-912494 --static-ip=192.168.200.200
E1208 18:28:16.922745  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-912494 --static-ip=192.168.200.200: (25.792827021s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-912494 ip
helpers_test.go:175: Cleaning up "static-ip-912494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-912494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-912494: (2.078816139s)
--- PASS: TestKicStaticIP (28.01s)
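TestKicStaticIP verifies the address by running minikube ip, as at kic_custom_network_test.go:138 above. A hedged sketch of the same comparison, valid only while the profile is still up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-912494", "ip").Output()
	if err != nil {
		fmt.Println("ip failed:", err)
		return
	}
	if ip := strings.TrimSpace(string(out)); ip != "192.168.200.200" {
		fmt.Printf("expected 192.168.200.200, got %s\n", ip)
	}
}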

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.64s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-654483 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-654483 --driver=docker  --container-runtime=crio: (23.103317609s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-657302 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-657302 --driver=docker  --container-runtime=crio: (24.727109045s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-654483
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-657302
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-657302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-657302
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-657302: (1.872378951s)
helpers_test.go:175: Cleaning up "first-654483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-654483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-654483: (1.904764442s)
--- PASS: TestMinikubeProfile (52.64s)
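A sketch of consuming profile list -ojson as the steps above do. The payload layout assumed here (top-level "valid"/"invalid" arrays of profiles carrying a Name field) is not confirmed by this log, so treat it as illustrative only:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Assumed schema: {"valid": [{"Name": ...}, ...], "invalid": [...]}.
	var profiles map[string][]struct {
		Name string `json:"Name"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range profiles["valid"] {
		fmt.Println("valid profile:", p.Name)
	}
}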

TestMountStart/serial/StartWithMountFirst (5.22s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-395968 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-395968 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.222724309s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.22s)

TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-395968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.97s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-414543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1208 18:29:38.843681  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-414543 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.968841266s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.97s)

TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-414543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-395968 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-395968 --alsologtostderr -v=5: (1.614357644s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-414543 ssh -- ls /minikube-host
E1208 18:29:47.253709  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-414543
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-414543: (1.206363299s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (6.97s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-414543
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-414543: (5.972708253s)
--- PASS: TestMountStart/serial/RestartStopped (6.97s)

TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-414543 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (84.58s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-985452 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1208 18:30:14.938597  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-985452 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.139992404s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.58s)

TestMultiNode/serial/DeployApp2Nodes (3.5s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-985452 -- rollout status deployment/busybox: (1.537009266s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-mb9gz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-985452 -- exec busybox-5bc68d56bd-wwj6s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.50s)
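The DNS checks above begin by collecting pod IPs and pod names with kubectl jsonpath queries. A condensed standalone version of that collection step (context name from the log; the query string is verbatim):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "multinode-985452",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// jsonpath joins the values with single spaces.
	ips := strings.Fields(string(out))
	fmt.Printf("found %d pod IPs: %v\n", len(ips), ips)
}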

TestMultiNode/serial/AddNode (49.33s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-985452 -v 3 --alsologtostderr
E1208 18:31:54.998390  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-985452 -v 3 --alsologtostderr: (48.737077247s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.33s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-985452 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.28s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

TestMultiNode/serial/CopyFile (9.26s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp testdata/cp-test.txt multinode-985452:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile64068920/001/cp-test_multinode-985452.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452:/home/docker/cp-test.txt multinode-985452-m02:/home/docker/cp-test_multinode-985452_multinode-985452-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m02 "sudo cat /home/docker/cp-test_multinode-985452_multinode-985452-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452:/home/docker/cp-test.txt multinode-985452-m03:/home/docker/cp-test_multinode-985452_multinode-985452-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m03 "sudo cat /home/docker/cp-test_multinode-985452_multinode-985452-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp testdata/cp-test.txt multinode-985452-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile64068920/001/cp-test_multinode-985452-m02.txt
E1208 18:32:22.684150  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452-m02:/home/docker/cp-test.txt multinode-985452:/home/docker/cp-test_multinode-985452-m02_multinode-985452.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452 "sudo cat /home/docker/cp-test_multinode-985452-m02_multinode-985452.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452-m02:/home/docker/cp-test.txt multinode-985452-m03:/home/docker/cp-test_multinode-985452-m02_multinode-985452-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m03 "sudo cat /home/docker/cp-test_multinode-985452-m02_multinode-985452-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp testdata/cp-test.txt multinode-985452-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile64068920/001/cp-test_multinode-985452-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452-m03:/home/docker/cp-test.txt multinode-985452:/home/docker/cp-test_multinode-985452-m03_multinode-985452.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452 "sudo cat /home/docker/cp-test_multinode-985452-m03_multinode-985452.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 cp multinode-985452-m03:/home/docker/cp-test.txt multinode-985452-m02:/home/docker/cp-test_multinode-985452-m03_multinode-985452-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 ssh -n multinode-985452-m02 "sudo cat /home/docker/cp-test_multinode-985452-m03_multinode-985452-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.26s)
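Every CopyFile step above is the same round trip: minikube cp a file onto a node, then ssh -n <node> and sudo cat it back for comparison. A compact sketch of one round trip, with the binary, profile, and paths taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	if err := exec.Command(bin, "-p", "multinode-985452", "cp",
		"testdata/cp-test.txt", "multinode-985452:/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	got, err := exec.Command(bin, "-p", "multinode-985452", "ssh", "-n", "multinode-985452",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if string(got) != string(want) {
		fmt.Println("contents differ after copy")
	}
}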

TestMultiNode/serial/StopNode (2.14s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-985452 node stop m03: (1.212016532s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-985452 status: exit status 7 (462.549758ms)
-- stdout --
	multinode-985452
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985452-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-985452-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr: exit status 7 (468.077101ms)
-- stdout --
	multinode-985452
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985452-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-985452-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 18:32:29.550366  442921 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:32:29.550671  442921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:32:29.550681  442921 out.go:309] Setting ErrFile to fd 2...
	I1208 18:32:29.550686  442921 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:32:29.550876  442921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:32:29.551065  442921 out.go:303] Setting JSON to false
	I1208 18:32:29.551106  442921 mustload.go:65] Loading cluster: multinode-985452
	I1208 18:32:29.551222  442921 notify.go:220] Checking for updates...
	I1208 18:32:29.551603  442921 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:32:29.551625  442921 status.go:255] checking status of multinode-985452 ...
	I1208 18:32:29.552110  442921 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:32:29.568839  442921 status.go:330] multinode-985452 host status = "Running" (err=<nil>)
	I1208 18:32:29.568869  442921 host.go:66] Checking if "multinode-985452" exists ...
	I1208 18:32:29.569138  442921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452
	I1208 18:32:29.584582  442921 host.go:66] Checking if "multinode-985452" exists ...
	I1208 18:32:29.584878  442921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:32:29.584950  442921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452
	I1208 18:32:29.600192  442921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452/id_rsa Username:docker}
	I1208 18:32:29.691614  442921 ssh_runner.go:195] Run: systemctl --version
	I1208 18:32:29.695534  442921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:32:29.705407  442921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:32:29.758828  442921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-08 18:32:29.750652471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:32:29.759402  442921 kubeconfig.go:92] found "multinode-985452" server: "https://192.168.58.2:8443"
	I1208 18:32:29.759427  442921 api_server.go:166] Checking apiserver status ...
	I1208 18:32:29.759459  442921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 18:32:29.769472  442921 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I1208 18:32:29.778038  442921 api_server.go:182] apiserver freezer: "3:freezer:/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio/crio-832d650180c47868224a6a49d711d5e35f85f3ef6632743de4b9a3d1759804a6"
	I1208 18:32:29.778100  442921 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f6d7ec17b6553b5decb9ae58a01d1be686266c13e3f56c76ad9d70c8de819c7/crio/crio-832d650180c47868224a6a49d711d5e35f85f3ef6632743de4b9a3d1759804a6/freezer.state
	I1208 18:32:29.786057  442921 api_server.go:204] freezer state: "THAWED"
	I1208 18:32:29.786083  442921 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1208 18:32:29.790221  442921 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1208 18:32:29.790254  442921 status.go:421] multinode-985452 apiserver status = Running (err=<nil>)
	I1208 18:32:29.790286  442921 status.go:257] multinode-985452 status: &{Name:multinode-985452 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 18:32:29.790320  442921 status.go:255] checking status of multinode-985452-m02 ...
	I1208 18:32:29.790683  442921 cli_runner.go:164] Run: docker container inspect multinode-985452-m02 --format={{.State.Status}}
	I1208 18:32:29.807645  442921 status.go:330] multinode-985452-m02 host status = "Running" (err=<nil>)
	I1208 18:32:29.807680  442921 host.go:66] Checking if "multinode-985452-m02" exists ...
	I1208 18:32:29.807953  442921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-985452-m02
	I1208 18:32:29.823374  442921 host.go:66] Checking if "multinode-985452-m02" exists ...
	I1208 18:32:29.823620  442921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 18:32:29.823709  442921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985452-m02
	I1208 18:32:29.839992  442921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/17738-336823/.minikube/machines/multinode-985452-m02/id_rsa Username:docker}
	I1208 18:32:29.927337  442921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 18:32:29.937443  442921 status.go:257] multinode-985452-m02 status: &{Name:multinode-985452-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1208 18:32:29.937477  442921 status.go:255] checking status of multinode-985452-m03 ...
	I1208 18:32:29.937715  442921 cli_runner.go:164] Run: docker container inspect multinode-985452-m03 --format={{.State.Status}}
	I1208 18:32:29.953924  442921 status.go:330] multinode-985452-m03 host status = "Stopped" (err=<nil>)
	I1208 18:32:29.953948  442921 status.go:343] host is not running, skipping remaining checks
	I1208 18:32:29.953959  442921 status.go:257] multinode-985452-m03 status: &{Name:multinode-985452-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)
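Note how minikube status signals state through its exit code: it returns 7 above once a node is stopped, while still printing the per-node table. A sketch of reading both with os/exec; the meaning of exit code 7 is inferred from this log rather than from a documented table:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-985452", "status").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Println("at least one node is not running (exit 7); table follows:")
	} else if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Print(string(out)) // Output() still captures stdout on a non-zero exit
}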

TestMultiNode/serial/StartAfterStop (11.31s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-985452 node start m03 --alsologtostderr: (10.643445427s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.31s)

TestMultiNode/serial/RestartKeepsNodes (117.47s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-985452
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-985452
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-985452: (24.774118488s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-985452 --wait=true -v=8 --alsologtostderr
E1208 18:33:06.944385  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:34:29.991383  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-985452 --wait=true -v=8 --alsologtostderr: (1m32.572284268s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-985452
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.47s)

TestMultiNode/serial/DeleteNode (4.71s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-985452 node delete m03: (4.10014686s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.71s)

TestMultiNode/serial/StopMultiNode (23.88s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 stop
E1208 18:34:47.253196  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-985452 stop: (23.6888339s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-985452 status: exit status 7 (98.505118ms)
-- stdout --
	multinode-985452
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-985452-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr: exit status 7 (95.461986ms)
-- stdout --
	multinode-985452
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-985452-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 18:35:07.293643  453294 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:35:07.293929  453294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:35:07.293939  453294 out.go:309] Setting ErrFile to fd 2...
	I1208 18:35:07.293944  453294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:35:07.294126  453294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:35:07.294306  453294 out.go:303] Setting JSON to false
	I1208 18:35:07.294348  453294 mustload.go:65] Loading cluster: multinode-985452
	I1208 18:35:07.294478  453294 notify.go:220] Checking for updates...
	I1208 18:35:07.294760  453294 config.go:182] Loaded profile config "multinode-985452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:35:07.294777  453294 status.go:255] checking status of multinode-985452 ...
	I1208 18:35:07.295221  453294 cli_runner.go:164] Run: docker container inspect multinode-985452 --format={{.State.Status}}
	I1208 18:35:07.311649  453294 status.go:330] multinode-985452 host status = "Stopped" (err=<nil>)
	I1208 18:35:07.311673  453294 status.go:343] host is not running, skipping remaining checks
	I1208 18:35:07.311681  453294 status.go:257] multinode-985452 status: &{Name:multinode-985452 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 18:35:07.311723  453294 status.go:255] checking status of multinode-985452-m02 ...
	I1208 18:35:07.312080  453294 cli_runner.go:164] Run: docker container inspect multinode-985452-m02 --format={{.State.Status}}
	I1208 18:35:07.328665  453294 status.go:330] multinode-985452-m02 host status = "Stopped" (err=<nil>)
	I1208 18:35:07.328693  453294 status.go:343] host is not running, skipping remaining checks
	I1208 18:35:07.328703  453294 status.go:257] multinode-985452-m02 status: &{Name:multinode-985452-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

TestMultiNode/serial/RestartMultiNode (78.41s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-985452 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-985452 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.813125974s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-985452 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.41s)

TestMultiNode/serial/ValidateNameConflict (24.42s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-985452
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-985452-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-985452-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.912879ms)
-- stdout --
	* [multinode-985452-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-985452-m02' is duplicated with machine name 'multinode-985452-m02' in profile 'multinode-985452'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-985452-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-985452-m03 --driver=docker  --container-runtime=crio: (22.162845996s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-985452
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-985452: exit status 80 (273.905528ms)
-- stdout --
	* Adding node m03 to cluster multinode-985452
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-985452-m03 already exists in multinode-985452-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-985452-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-985452-m03: (1.839669887s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.42s)

TestPreload (143.63s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-278332 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-278332 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m9.275779093s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-278332 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-278332
E1208 18:38:06.943625  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-278332: (5.677844837s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-278332 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-278332 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m5.395269562s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-278332 image list
helpers_test.go:175: Cleaning up "test-preload-278332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-278332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-278332: (2.245865036s)
--- PASS: TestPreload (143.63s)

TestScheduledStopUnix (100.46s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-789728 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-789728 --memory=2048 --driver=docker  --container-runtime=crio: (23.953712466s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-789728 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-789728 -n scheduled-stop-789728
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-789728 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-789728 --cancel-scheduled
E1208 18:39:47.253185  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-789728 -n scheduled-stop-789728
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-789728
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-789728 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-789728
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-789728: exit status 7 (84.090595ms)

                                                
                                                
-- stdout --
	scheduled-stop-789728
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-789728 -n scheduled-stop-789728
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-789728 -n scheduled-stop-789728: exit status 7 (78.8341ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-789728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-789728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-789728: (5.045450512s)
--- PASS: TestScheduledStopUnix (100.46s)
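Note: the scheduled-stop flags exercised above can be driven by hand. A sketch with a placeholder profile name, using only flags that appear in this run:

	minikube stop -p <profile> --schedule 5m                  # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p <profile>     # inspect the pending schedule
	minikube stop -p <profile> --cancel-scheduled             # disarm it
	minikube stop -p <profile> --schedule 15s                 # re-arm; once it fires, status exits 7 (Stopped)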

                                                
                                    
TestInsufficientStorage (13.09s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-578461 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E1208 18:41:10.299690  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-578461 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.714463771s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"95d9b6d7-d502-4a8b-86db-039c5f0946ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-578461] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5422fd9-470a-445b-8d6a-5e94eb255380","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17738"}}
	{"specversion":"1.0","id":"de6589db-6c88-4a4d-81ab-105cfbdf3c97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e410f9fb-dddd-4924-ac6d-34715f77c216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig"}}
	{"specversion":"1.0","id":"8d15cf5e-9040-45c8-a123-d36d3c5c7872","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube"}}
	{"specversion":"1.0","id":"39098e8f-27f3-42d4-8e7a-e71e8d04b9e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"08f5fbd7-1687-4505-9ffa-3483c14e8950","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d5f40a03-1898-4c8d-a463-9c949a216341","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3b8ffc0d-2931-4ff7-8013-9123e31ae514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"10996af0-7102-4434-989c-075ec3aa01ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a21e5e07-7045-4f97-acdc-9745ae7a6025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"47f05c00-3bea-432a-9d86-8d40c322f7ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-578461 in cluster insufficient-storage-578461","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f107b5e9-471b-4670-ab7b-0bf3e36fb0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"86b3f61e-bd57-4007-b3c5-bc0e586784f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"39f91d8d-7ab5-4f4a-a957-8aec82b68e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-578461 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-578461 --output=json --layout=cluster: exit status 7 (275.212066ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-578461","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-578461","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 18:41:10.895259  474842 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-578461" does not appear in /home/jenkins/minikube-integration/17738-336823/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-578461 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-578461 --output=json --layout=cluster: exit status 7 (270.324106ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-578461","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-578461","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 18:41:11.163730  474933 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-578461" does not appear in /home/jenkins/minikube-integration/17738-336823/kubeconfig
	E1208 18:41:11.173334  474933 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/insufficient-storage-578461/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-578461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-578461
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-578461: (1.827103463s)
--- PASS: TestInsufficientStorage (13.09s)
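Note: the exit-26 failure is simulated here via the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events above. On a host that is genuinely out of space, the remediation suggested by the error message itself looks roughly like:

	docker system prune -a                 # reclaim unused Docker data on the host
	minikube ssh -- docker system prune    # same inside the node, if using the Docker runtime
	minikube start -p <profile> --force    # or skip the storage check entirely, as the message notes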

                                                
                                    
TestKubernetesUpgrade (347.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.203282801s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-471792
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-471792: (3.828023205s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-471792 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-471792 status --format={{.Host}}: exit status 7 (83.291021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.734413553s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-471792 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (92.95698ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-471792] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-471792
	    minikube start -p kubernetes-upgrade-471792 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4717922 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-471792 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-471792 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.834953998s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-471792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-471792
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-471792: (2.136042119s)
--- PASS: TestKubernetesUpgrade (347.99s)
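Note: the upgrade path above is stop-then-start with a newer --kubernetes-version, while an in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED; the stderr quoted above spells out the recovery. A condensed sketch with a placeholder profile name:

	minikube start -p <profile> --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p <profile>
	minikube start -p <profile> --kubernetes-version=v1.29.0-rc.1 --driver=docker --container-runtime=crio   # upgrade in place
	# to go back down, recreate instead of downgrading:
	minikube delete -p <profile>
	minikube start -p <profile> --kubernetes-version=v1.16.0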

                                                
                                    
TestMissingContainerUpgrade (131.37s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.1166285000.exe start -p missing-upgrade-547671 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.1166285000.exe start -p missing-upgrade-547671 --memory=2200 --driver=docker  --container-runtime=crio: (1m5.889829687s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-547671
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-547671: (2.718055446s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-547671
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-547671 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-547671 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.207832367s)
helpers_test.go:175: Cleaning up "missing-upgrade-547671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-547671
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-547671: (2.075778835s)
--- PASS: TestMissingContainerUpgrade (131.37s)
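Note: the "missing container" scenario is manufactured by deleting the node container out from under a cluster created with an old release, then letting the current binary recreate it. Sketch (the old-binary path and profile name are placeholders):

	/path/to/minikube-v1.9.0 start -p <profile> --driver=docker --container-runtime=crio    # create with the old release
	docker stop <profile> && docker rm <profile>                                            # simulate the container vanishing
	out/minikube-linux-amd64 start -p <profile> --driver=docker --container-runtime=crio    # current binary rebuilds the node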

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888395 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-888395 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (87.38268ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-888395] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
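Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and the MK_USAGE hint above also covers the case where a version is pinned in global config. Sketch with a placeholder profile name:

	minikube start -p <profile> --no-kubernetes --kubernetes-version=1.20 ...    # rejected, exit 14 (MK_USAGE)
	minikube config unset kubernetes-version                                     # clear any global default, per the hint
	minikube start -p <profile> --no-kubernetes --driver=docker --container-runtime=crio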

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888395 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888395 --driver=docker  --container-runtime=crio: (33.993028161s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-888395 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.35s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888395 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888395 --no-kubernetes --driver=docker  --container-runtime=crio: (6.524284567s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-888395 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-888395 status -o json: exit status 2 (338.898278ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-888395","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-888395
E1208 18:41:54.997416  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-888395: (1.978162102s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.84s)
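Note: with Kubernetes disabled on a running node, status exits non-zero even though the host is up. In this run that was exit 2 with Host "Running" and Kubelet/APIServer "Stopped", versus exit 7 in the fully stopped cases elsewhere in this report. A quick check (placeholder profile):

	minikube -p <profile> status -o json; echo $?    # expect 2 here: host Running, Kubernetes components Stopped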

                                                
                                    
TestNoKubernetes/serial/Start (11.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888395 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888395 --no-kubernetes --driver=docker  --container-runtime=crio: (11.345420384s)
--- PASS: TestNoKubernetes/serial/Start (11.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-888395 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-888395 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.720271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
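Note: this check leans on systemctl is-active --quiet exiting non-zero for an inactive unit; the status 3 surfaced through ssh above is consistent with systemd's "not running" result. Sketch:

	minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
	echo $?    # non-zero means kubelet is not running, which is what this subtest asserts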

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-888395
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-888395: (1.474215436s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-888395 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-888395 --driver=docker  --container-runtime=crio: (8.504500561s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-888395 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-888395 "sudo systemctl is-active --quiet service kubelet": exit status 1 (371.678127ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestNetworkPlugins/group/false (5.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-130225 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-130225 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (283.981083ms)

                                                
                                                
-- stdout --
	* [false-130225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 18:42:40.712246  499455 out.go:296] Setting OutFile to fd 1 ...
	I1208 18:42:40.712464  499455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:42:40.712504  499455 out.go:309] Setting ErrFile to fd 2...
	I1208 18:42:40.712529  499455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1208 18:42:40.712918  499455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17738-336823/.minikube/bin
	I1208 18:42:40.713781  499455 out.go:303] Setting JSON to false
	I1208 18:42:40.715904  499455 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8661,"bootTime":1702052300,"procs":468,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 18:42:40.716017  499455 start.go:138] virtualization: kvm guest
	I1208 18:42:40.719010  499455 out.go:177] * [false-130225] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1208 18:42:40.720810  499455 out.go:177]   - MINIKUBE_LOCATION=17738
	I1208 18:42:40.720706  499455 notify.go:220] Checking for updates...
	I1208 18:42:40.727575  499455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 18:42:40.731284  499455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17738-336823/kubeconfig
	I1208 18:42:40.733127  499455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17738-336823/.minikube
	I1208 18:42:40.739112  499455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 18:42:40.740663  499455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 18:42:40.742736  499455 config.go:182] Loaded profile config "force-systemd-env-742678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1208 18:42:40.742838  499455 config.go:182] Loaded profile config "running-upgrade-189872": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1208 18:42:40.742897  499455 config.go:182] Loaded profile config "stopped-upgrade-897546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1208 18:42:40.742980  499455 driver.go:392] Setting default libvirt URI to qemu:///system
	I1208 18:42:40.770504  499455 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1208 18:42:40.770640  499455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 18:42:40.882665  499455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:97 SystemTime:2023-12-08 18:42:40.870097772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1208 18:42:40.882791  499455 docker.go:295] overlay module found
	I1208 18:42:40.884471  499455 out.go:177] * Using the docker driver based on user configuration
	I1208 18:42:40.886485  499455 start.go:298] selected driver: docker
	I1208 18:42:40.886506  499455 start.go:902] validating driver "docker" against <nil>
	I1208 18:42:40.886521  499455 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 18:42:40.889345  499455 out.go:177] 
	W1208 18:42:40.890781  499455 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1208 18:42:40.892158  499455 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-130225 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-130225" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-130225

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-130225"

                                                
                                                
----------------------- debugLogs end: false-130225 [took: 5.374806733s] --------------------------------
helpers_test.go:175: Cleaning up "false-130225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-130225
--- PASS: TestNetworkPlugins/group/false (5.98s)
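Note: this group passes precisely because the start is rejected: the crio runtime requires a CNI, so --cni=false exits 14 before any cluster exists, which is why every debugLogs probe above reports a missing profile or context. A sketch of the contrast, with a placeholder profile and an illustrative CNI choice:

	minikube start -p <profile> --cni=false --driver=docker --container-runtime=crio     # rejected: "crio" requires CNI
	minikube start -p <profile> --cni=bridge --driver=docker --container-runtime=crio    # pick an explicit CNI instead (bridge is one illustrative option)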

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-897546
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

                                                
                                    
TestPause/serial/Start (74.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-442768 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-442768 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m14.458361633s)
--- PASS: TestPause/serial/Start (74.46s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-442768 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-442768 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.863904231s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.88s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-442768 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-442768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-442768 --output=json --layout=cluster: exit status 2 (302.727789ms)

                                                
                                                
-- stdout --
	{"Name":"pause-442768","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-442768","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

                                                
                                    
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-442768 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-442768 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

TestPause/serial/DeletePaused (4.56s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-442768 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-442768 --alsologtostderr -v=5: (4.562583885s)
--- PASS: TestPause/serial/DeletePaused (4.56s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-442768
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-442768: exit status 1 (15.903005ms)
-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-442768: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)
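
VerifyDeletedResources treats the exit status 1 from docker volume inspect (and the empty [] result) as proof that delete -p removed the profile's volume, alongside the profile list, docker ps -a, and docker network ls checks. A rough by-hand equivalent, assuming the same profile name:

	docker volume inspect pause-442768 >/dev/null 2>&1 || echo "volume removed"
	docker network ls --filter name=pause-442768 --format '{{.Name}}'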

TestStartStop/group/old-k8s-version/serial/FirstStart (130.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-754199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-754199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m10.651113297s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.65s)

TestStartStop/group/embed-certs/serial/FirstStart (70.93s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-924934 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-924934 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m10.930416367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.93s)

TestStartStop/group/embed-certs/serial/DeployApp (7.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-924934 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8c5a64d8-dbdf-4a6a-b4cd-b85703a31718] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8c5a64d8-dbdf-4a6a-b4cd-b85703a31718] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.016711193s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-924934 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.41s)
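
DeployApp is a smoke test: create the busybox pod from testdata/busybox.yaml, wait for it to go Pending -> Running, then prove exec works by reading the container's open-file limit. Roughly equivalent by hand (the suite polls with its own helper rather than kubectl wait):

	kubectl --context embed-certs-924934 create -f testdata/busybox.yaml
	kubectl --context embed-certs-924934 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
	kubectl --context embed-certs-924934 exec busybox -- /bin/sh -c "ulimit -n"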

TestStartStop/group/no-preload/serial/FirstStart (67.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-554591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-554591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (1m7.708898171s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.71s)
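
--preload=false makes minikube skip the preloaded image tarball it would normally download for the selected Kubernetes version, so component images are pulled individually during start. The relevant flags from the invocation above:

	out/minikube-linux-amd64 start -p no-preload-554591 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.1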

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-924934 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-924934 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11686915s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-924934 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (13.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-924934 --alsologtostderr -v=3
E1208 18:46:54.998091  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-924934 --alsologtostderr -v=3: (13.286403135s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924934 -n embed-certs-924934
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924934 -n embed-certs-924934: exit status 7 (85.801481ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-924934 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
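
The test accepts exit status 7 from minikube status as the stopped-host case (hence the "(may be ok)" note) and then verifies that addons enable dashboard still succeeds against the stopped profile, so the addon is provisioned on the next start. A by-hand sketch, assuming the same profile:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924934; echo "exit=$?"
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-924934 --images=MetricsScraper=registry.k8s.io/echoserver:1.4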

TestStartStop/group/embed-certs/serial/SecondStart (336.54s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-924934 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-924934 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m36.079185766s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924934 -n embed-certs-924934
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.54s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-754199 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0581ec0-821e-44e5-8f03-04eeee4131a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0581ec0-821e-44e5-8f03-04eeee4131a9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.014587416s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-754199 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-754199 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-754199 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-754199 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-754199 --alsologtostderr -v=3: (12.138895941s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-754199 -n old-k8s-version-754199
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-754199 -n old-k8s-version-754199: exit status 7 (103.46066ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-754199 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (409.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-754199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-754199 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m49.480145656s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-754199 -n old-k8s-version-754199
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (409.83s)

TestStartStop/group/no-preload/serial/DeployApp (10s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-554591 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [977ea46f-f458-4ac6-b142-e0899d91ff76] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [977ea46f-f458-4ac6-b142-e0899d91ff76] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.016022303s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-554591 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.00s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-554591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-554591 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-554591 --alsologtostderr -v=3
E1208 18:48:06.943685  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-554591 --alsologtostderr -v=3: (12.184368694s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554591 -n no-preload-554591
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554591 -n no-preload-554591: exit status 7 (98.861167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-554591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (335.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-554591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-554591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (5m35.372794196s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-554591 -n no-preload-554591
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.88s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-194829 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1208 18:49:47.253236  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-194829 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m10.054176634s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.05s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-194829 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [296221ab-d0ab-4f46-8617-d4b4f8e97cfa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [296221ab-d0ab-4f46-8617-d4b4f8e97cfa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014971765s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-194829 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-194829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-194829 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-194829 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-194829 --alsologtostderr -v=3: (11.923731535s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829: exit status 7 (84.335742ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-194829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-194829 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1208 18:51:09.992237  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
E1208 18:51:54.997918  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-194829 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.828141447s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.07s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t8js9" [5c37cc66-f9d6-48ac-b193-c06c1d793235] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t8js9" [5c37cc66-f9d6-48ac-b193-c06c1d793235] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.072274346s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.07s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t8js9" [5c37cc66-f9d6-48ac-b193-c06c1d793235] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009385928s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-924934 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-924934 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.71s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-924934 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924934 -n embed-certs-924934
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924934 -n embed-certs-924934: exit status 2 (295.691627ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-924934 -n embed-certs-924934
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-924934 -n embed-certs-924934: exit status 2 (298.064305ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-924934 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924934 -n embed-certs-924934
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-924934 -n embed-certs-924934
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.71s)
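
Pause drives a full pause/unpause round trip and uses the Go-template status output as its oracle: while paused, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2, and after unpause the same queries return cleanly. By hand:

	out/minikube-linux-amd64 pause -p embed-certs-924934
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924934
	out/minikube-linux-amd64 unpause -p embed-certs-924934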

TestStartStop/group/newest-cni/serial/FirstStart (38.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-671834 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1208 18:53:06.943677  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/addons-766826/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-671834 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (38.633900244s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.63s)
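
The newest-cni profile starts with --network-plugin=cni and passes a custom pod CIDR to kubeadm; --wait is trimmed to apiserver,system_pods,default_sa because, as the WARNING lines below note, workload pods cannot schedule until a CNI is actually installed. The CNI-relevant flags from the invocation above, trimmed for reuse:

	out/minikube-linux-amd64 start -p newest-cni-671834 --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --wait=apiserver,system_pods,default_sa --driver=docker --container-runtime=crio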

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-671834 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-671834 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-671834 --alsologtostderr -v=3: (1.221748173s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671834 -n newest-cni-671834
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671834 -n newest-cni-671834: exit status 7 (96.718914ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-671834 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (27.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-671834 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-671834 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (27.463866969s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671834 -n newest-cni-671834
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z49v5" [998c18b3-d83c-44e9-950d-b69ace062d59] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z49v5" [998c18b3-d83c-44e9-950d-b69ace062d59] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.019692097s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z49v5" [998c18b3-d83c-44e9-950d-b69ace062d59] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010720915s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-554591 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-554591 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-554591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554591 -n no-preload-554591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554591 -n no-preload-554591: exit status 2 (324.473113ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-554591 -n no-preload-554591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-554591 -n no-preload-554591: exit status 2 (311.009786ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-554591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-554591 -n no-preload-554591
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-554591 -n no-preload-554591
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.84s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-671834 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.75s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-671834 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671834 -n newest-cni-671834
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671834 -n newest-cni-671834: exit status 2 (334.838681ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671834 -n newest-cni-671834
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671834 -n newest-cni-671834: exit status 2 (319.640223ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-671834 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671834 -n newest-cni-671834
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671834 -n newest-cni-671834
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.75s)

TestNetworkPlugins/group/auto/Start (69.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.644792112s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.64s)

TestNetworkPlugins/group/kindnet/Start (72.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m12.025189859s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.03s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lzp2x" [09bb97c4-67a0-4a25-a36d-8808a63e91c6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015440887s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lzp2x" [09bb97c4-67a0-4a25-a36d-8808a63e91c6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008419153s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-754199 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-754199 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-754199 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-754199 -n old-k8s-version-754199
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-754199 -n old-k8s-version-754199: exit status 2 (330.553644ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-754199 -n old-k8s-version-754199
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-754199 -n old-k8s-version-754199: exit status 2 (306.168474ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-754199 --alsologtostderr -v=1
E1208 18:54:47.253666  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/functional-290514/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-754199 -n old-k8s-version-754199
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-754199 -n old-k8s-version-754199
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

TestNetworkPlugins/group/calico/Start (62.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.222461104s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.22s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pzxrv" [c0e130d5-899c-470a-8a4d-58ce314cf1bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pzxrv" [c0e130d5-899c-470a-8a4d-58ce314cf1bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009170137s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5smf7" [a0589f0e-4b11-4408-b59f-7fd4a7ef6e75] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021020963s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qst6x" [aee4dbe3-4c0f-4ae8-b536-2d9885821b1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qst6x" [aee4dbe3-4c0f-4ae8-b536-2d9885821b1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00857333s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)
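
The DNS subtests pass as long as nslookup kubernetes.default exits zero inside the netcat pod. Roughly the same probe expressed as a small Go program; note it only behaves this way when run inside a pod, where the cluster DNS search path in /etc/resolv.conf expands the short name to kubernetes.default.svc.cluster.local (a hedged sketch, not the test's implementation):

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Inside a pod, the cluster DNS search domains resolve the short
	// name "kubernetes.default" to the API server's ClusterIP service.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Fprintln(os.Stderr, "DNS lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}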

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
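
On the two nc probes: -w 5 sets a 5-second timeout, -i 5 a 5-second interval, and -z means connect-only with no payload. Localhost dials localhost:8080 from inside the pod, while HairPin dials the pod's own Service name netcat:8080, so the connection leaves the pod and is NATed straight back to it, the classic hairpin case. An equivalent connect-only probe in Go (illustrative sketch):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// dialz mimics "nc -w 5 -z host port": open a TCP connection, send nothing.
func dialz(hostport string) error {
	conn, err := net.DialTimeout("tcp", hostport, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, target := range []string{"localhost:8080", "netcat:8080"} {
		if err := dialz(target); err != nil {
			fmt.Fprintf(os.Stderr, "%s unreachable: %v\n", target, err)
			os.Exit(1)
		}
		fmt.Println(target, "reachable")
	}
}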

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qtz7z" [3779add5-469b-4c5b-bd05-5a582d7a902a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qtz7z" [3779add5-469b-4c5b-bd05-5a582d7a902a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.082236803s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.08s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zj9xf" [0589cfea-5a86-4eb0-b994-ef4947b54915] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024406799s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (9.61s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-59tcv" [532c3474-2dac-4129-9b46-8c2219e26963] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-59tcv" [532c3474-2dac-4129-9b46-8c2219e26963] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.011700603s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.61s)

TestNetworkPlugins/group/custom-flannel/Start (61.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.695759291s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.70s)
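
As the command line above shows, the Start tests exercise the real binary: net_test.go shells out to out/minikube-linux-amd64 start with the CNI flag under test, here a custom flannel manifest passed via --cni=testdata/kube-flannel.yaml. A stripped-down sketch of that pattern (flags copied from the log; the surrounding program is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "custom-flannel-130225", "--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml",
		"--driver=docker", "--container-runtime=crio")
	// CombinedOutput captures stdout and stderr together, matching how
	// the suite records the command transcript shown in this report.
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("start failed after %v: %v\n%s", time.Since(start), err, out)
		return
	}
	fmt.Printf("cluster up in %v\n", time.Since(start))
}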

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qtz7z" [3779add5-469b-4c5b-bd05-5a582d7a902a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010660636s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-194829 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/enable-default-cni/Start (43.07s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (43.068134047s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.07s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-194829 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
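
VerifyKubernetesImages scans image list --format=json for images that do not belong to a stock minikube cluster, such as the kindnetd and busybox images flagged above. A hedged sketch of parsing that output; the repoTags field name is an assumption about the JSON schema and should be verified against the minikube version in use:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image lists only the field this sketch assumes minikube emits; the
// schema is an assumption, not documented behavior.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "default-k8s-diff-port-194829",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}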

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-194829 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829: exit status 2 (306.164308ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829: exit status 2 (303.171344ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-194829 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-194829 --alsologtostderr -v=1: (1.014459883s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-194829 -n default-k8s-diff-port-194829
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.30s)
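
The "status error: exit status 2 (may be ok)" lines are expected here: while the cluster is paused, minikube status exits non-zero by design even though the printed component states (Paused, Stopped) are exactly what the test wants, so the test inspects the exit code and the output separately. A sketch of that pattern (profile and flags copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "default-k8s-diff-port-194829")
	out, err := cmd.Output()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit just flags a non-Running component; the
			// captured stdout still says whether the pause took effect.
			fmt.Printf("status exited %d, reported: %s", exitErr.ExitCode(), out)
			return
		}
		panic(err) // the binary could not be run at all
	}
	fmt.Printf("reported: %s", out)
}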

TestNetworkPlugins/group/flannel/Start (60.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.970688925s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.97s)

TestNetworkPlugins/group/bridge/Start (80.3s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-130225 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.302614935s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gggwx" [eeb9523f-1b04-428b-9ddd-77613344fcba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1208 18:56:54.997617  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/ingress-addon-legacy-722179/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gggwx" [eeb9523f-1b04-428b-9ddd-77613344fcba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.012152699s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bjffm" [e89f6a44-c823-452f-827c-4edcf2b44296] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bjffm" [e89f6a44-c823-452f-827c-4edcf2b44296] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00957598s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lgt6d" [10875759-3549-4109-b5b0-2f8e73d97ecb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.018204319s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wszc7" [02fca6ac-6d88-4611-9f3f-0f5e52c46e31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1208 18:57:24.931872  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:24.937430  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:24.947685  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:24.968344  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:25.008953  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:25.089064  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:25.249876  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:25.570857  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:26.211432  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
E1208 18:57:27.492277  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/old-k8s-version-754199/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wszc7" [02fca6ac-6d88-4611-9f3f-0f5e52c46e31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009785394s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-130225 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-130225 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7bmvm" [a3f42823-e2a4-4f4c-96a7-b546b0ddbc46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1208 18:57:52.214563  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.219834  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.230061  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.250346  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.290887  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.371279  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.531662  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:52.852277  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
E1208 18:57:53.492655  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-7bmvm" [a3f42823-e2a4-4f4c-96a7-b546b0ddbc46] Running
E1208 18:57:57.333649  343628 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/no-preload-554591/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009160682s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-130225 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-130225 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (27/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-916149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-916149
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.42s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-130225 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-130225

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-130225

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /etc/hosts:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /etc/resolv.conf:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-130225

>>> host: crictl pods:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: crictl containers:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> k8s: describe netcat deployment:
error: context "kubenet-130225" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-130225" does not exist

>>> k8s: netcat logs:
error: context "kubenet-130225" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-130225" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-130225" does not exist

>>> k8s: coredns logs:
error: context "kubenet-130225" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-130225" does not exist

>>> k8s: api server logs:
error: context "kubenet-130225" does not exist

>>> host: /etc/cni:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: ip a s:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: ip r s:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: iptables-save:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: iptables table nat:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-130225" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-130225" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-130225" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: kubelet daemon config:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> k8s: kubelet logs:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-130225

>>> host: docker daemon status:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: docker daemon config:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: docker system info:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: cri-docker daemon status:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: cri-docker daemon config:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: cri-dockerd version:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

>>> host: containerd daemon status:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-130225"

                                                
                                                
----------------------- debugLogs end: kubenet-130225 [took: 4.126050059s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-130225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-130225
--- SKIP: TestNetworkPlugins/group/kubenet (4.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-130225 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-130225

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-130225

>>> host: /etc/nsswitch.conf:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/hosts:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/resolv.conf:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-130225

>>> host: crictl pods:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: crictl containers:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> k8s: describe netcat deployment:
error: context "cilium-130225" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-130225" does not exist

>>> k8s: netcat logs:
error: context "cilium-130225" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-130225" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-130225" does not exist

>>> k8s: coredns logs:
error: context "cilium-130225" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-130225" does not exist

>>> k8s: api server logs:
error: context "cilium-130225" does not exist

>>> host: /etc/cni:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: ip a s:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: ip r s:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: iptables-save:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: iptables table nat:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-130225

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-130225

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-130225" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-130225" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-130225

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-130225

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-130225" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-130225" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-130225" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-130225" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-130225" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: kubelet daemon config:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> k8s: kubelet logs:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17738-336823/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 08 Dec 2023 18:42:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-env-742678
contexts:
- context:
    cluster: force-systemd-env-742678
    extensions:
    - extension:
        last-update: Fri, 08 Dec 2023 18:42:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-env-742678
  name: force-systemd-env-742678
current-context: force-systemd-env-742678
kind: Config
preferences: {}
users:
- name: force-systemd-env-742678
  user:
    client-certificate: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/force-systemd-env-742678/client.crt
    client-key: /home/jenkins/minikube-integration/17738-336823/.minikube/profiles/force-systemd-env-742678/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-130225

>>> host: docker daemon status:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: docker daemon config:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: docker system info:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: cri-docker daemon status:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: cri-docker daemon config:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: cri-dockerd version:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: containerd daemon status:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: containerd daemon config:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: containerd config dump:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: crio daemon status:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: crio daemon config:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: /etc/crio:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

>>> host: crio config:
* Profile "cilium-130225" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-130225"

----------------------- debugLogs end: cilium-130225 [took: 4.40858926s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-130225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-130225
--- SKIP: TestNetworkPlugins/group/cilium (4.89s)