Test Report: Docker_Linux_crio 17848

4e03e3f64731b9a82b3398fd73787c019520d693:2023-12-21:32379

Failed tests (6/316)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
|    35 | TestAddons/parallel/Ingress                          |       152.57 |
|   134 | TestFunctional/parallel/ImageCommands/ImageBuild     |         6.98 |
|   167 | TestIngressAddonLegacy/serial/ValidateIngressAddons  |       182.94 |
|   217 | TestMultiNode/serial/PingHostFrom2Pods               |         2.99 |
|   239 | TestRunningBinaryUpgrade                             |        69.36 |
|   265 | TestStoppedBinaryUpgrade/Upgrade                     |       107.19 |
|-------|------------------------------------------------------|--------------|
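Any one of these can be re-run in isolation with go test's -run filter. A minimal sketch, assuming the standard minikube integration-test layout (the test/integration package that the addons_test.go and helpers_test.go frames below belong to); the driver and runtime flags the CI job passes are omitted here:

    # Hypothetical re-run of the first failure, from the minikube repo root.
    # -run takes an anchored regexp; -timeout must comfortably exceed the
    # test's own waits (the Ingress test alone waits up to 8m0s for pods).
    go test ./test/integration -v -timeout 40m -run 'TestAddons/parallel/Ingress'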
TestAddons/parallel/Ingress (152.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-443778 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-443778 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-443778 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [30a2f515-e4d8-424e-a4b6-67d6deb7bf29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [30a2f515-e4d8-424e-a4b6-67d6deb7bf29] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003410595s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-443778 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.861618006s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-443778 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-443778 addons disable ingress-dns --alsologtostderr -v=1: (1.213054618s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-443778 addons disable ingress --alsologtostderr -v=1: (7.569084922s)
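The decisive step above is the curl probe: ssh reported "Process exited with status 28", which matches curl's exit code for a timed-out transfer (CURLE_OPERATION_TIMEDOUT), so the ingress never answered on port 80 within the roughly 2m10s the harness allowed; a refused connection would have surfaced as curl exit code 7 instead. A minimal sketch of the same probe against a live profile, with the profile name and Host header taken from the log and an explicit --max-time added here for illustration:

    # Probe the nginx ingress from inside the minikube node, as the test does.
    # Exit code 28 = timeout, 7 = connection refused, 0 = the expected success.
    out/minikube-linux-amd64 -p addons-443778 ssh \
      "curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"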
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-443778
helpers_test.go:235: (dbg) docker inspect addons-443778:

-- stdout --
	[
	    {
	        "Id": "08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906",
	        "Created": "2023-12-21T18:05:09.619648653Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-21T18:05:09.897746074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aaeab328720c5f9c5998a41dcf23df3cc1d95a0c58c535e504f0d445f5dfad94",
	        "ResolvConfPath": "/var/lib/docker/containers/08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906/hostname",
	        "HostsPath": "/var/lib/docker/containers/08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906/hosts",
	        "LogPath": "/var/lib/docker/containers/08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906/08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906-json.log",
	        "Name": "/addons-443778",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-443778:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-443778",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0fb3c0cc733028a1275e28e73f35775677c169eb42fca009f92d7b6accd437ae-init/diff:/var/lib/docker/overlay2/5f93c210e62b94f4976b2a81580f0bf0da95be40a907596ee84a499ee959f455/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0fb3c0cc733028a1275e28e73f35775677c169eb42fca009f92d7b6accd437ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0fb3c0cc733028a1275e28e73f35775677c169eb42fca009f92d7b6accd437ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0fb3c0cc733028a1275e28e73f35775677c169eb42fca009f92d7b6accd437ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-443778",
	                "Source": "/var/lib/docker/volumes/addons-443778/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-443778",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-443778",
	                "name.minikube.sigs.k8s.io": "addons-443778",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10a2e4103fb8e99665b38659e072f2b00356e07f394bc324d008c8fb2b55e267",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/10a2e4103fb8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-443778": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "08fa7e5d4c9c",
	                        "addons-443778"
	                    ],
	                    "NetworkID": "8fe18393074e82495d785d8dd89689d45c202c0342c17355ba330dc41617f58d",
	                    "EndpointID": "0ead89ccd1a2739d70aee14d992f123d49602ad19913fd8f9148b640462ad8f5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
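Nothing in the inspect output implicates the node itself: the container is Running, not OOM-killed, and 22/tcp is published to 127.0.0.1:32772, so the ssh transport was healthy and the timeout happened at the HTTP layer inside the cluster. Individual fields can be pulled without the full JSON dump via a Go template; the one below is adapted from the cli_runner invocations in the Last Start log further down:

    # Host port backing the node's ssh endpoint (expected here: 32772).
    docker container inspect addons-443778 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'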
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-443778 -n addons-443778
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-443778 logs -n 25: (1.126648224s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-664125                                                                     | download-only-664125   | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC | 21 Dec 23 18:04 UTC |
	| delete  | -p download-only-664125                                                                     | download-only-664125   | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC | 21 Dec 23 18:04 UTC |
	| start   | --download-only -p                                                                          | download-docker-939435 | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC |                     |
	|         | download-docker-939435                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-939435                                                                   | download-docker-939435 | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC | 21 Dec 23 18:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-832088   | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC |                     |
	|         | binary-mirror-832088                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44331                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-832088                                                                     | binary-mirror-832088   | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC | 21 Dec 23 18:04 UTC |
	| addons  | enable dashboard -p                                                                         | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC |                     |
	|         | addons-443778                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC |                     |
	|         | addons-443778                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-443778 --wait=true                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC | 21 Dec 23 18:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-443778 addons                                                                        | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | addons-443778                                                                               |                        |         |         |                     |                     |
	| addons  | addons-443778 addons disable                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-443778 ip                                                                            | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	| addons  | addons-443778 addons disable                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | addons-443778                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | -p addons-443778                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | -p addons-443778                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-443778 ssh curl -s                                                                   | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-443778 ssh cat                                                                       | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:07 UTC |
	|         | /opt/local-path-provisioner/pvc-4dcfabe1-8499-4626-b912-956875087aab_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-443778 addons disable                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:07 UTC | 21 Dec 23 18:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-443778 addons                                                                        | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:08 UTC | 21 Dec 23 18:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-443778 addons                                                                        | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:08 UTC | 21 Dec 23 18:08 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-443778 ip                                                                            | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:10 UTC | 21 Dec 23 18:10 UTC |
	| addons  | addons-443778 addons disable                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:10 UTC | 21 Dec 23 18:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-443778 addons disable                                                                | addons-443778          | jenkins | v1.32.0 | 21 Dec 23 18:10 UTC | 21 Dec 23 18:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:04:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:04:48.476988   17890 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:04:48.477287   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:04:48.477298   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:04:48.477305   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:04:48.477493   17890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:04:48.478123   17890 out.go:303] Setting JSON to false
	I1221 18:04:48.478921   17890 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2836,"bootTime":1703179053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:04:48.478991   17890 start.go:138] virtualization: kvm guest
	I1221 18:04:48.481055   17890 out.go:177] * [addons-443778] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:04:48.482628   17890 notify.go:220] Checking for updates...
	I1221 18:04:48.482637   17890 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:04:48.483988   17890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:04:48.485355   17890 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:04:48.486752   17890 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:04:48.488111   17890 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:04:48.489335   17890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:04:48.490793   17890 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:04:48.510719   17890 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:04:48.510814   17890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:04:48.560085   17890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-21 18:04:48.552406959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:04:48.560165   17890 docker.go:295] overlay module found
	I1221 18:04:48.562119   17890 out.go:177] * Using the docker driver based on user configuration
	I1221 18:04:48.563449   17890 start.go:298] selected driver: docker
	I1221 18:04:48.563459   17890 start.go:902] validating driver "docker" against <nil>
	I1221 18:04:48.563469   17890 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:04:48.564183   17890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:04:48.615135   17890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-21 18:04:48.60782739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:04:48.615277   17890 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:04:48.615511   17890 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 18:04:48.617205   17890 out.go:177] * Using Docker driver with root privileges
	I1221 18:04:48.618617   17890 cni.go:84] Creating CNI manager for ""
	I1221 18:04:48.618639   17890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:04:48.618649   17890 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 18:04:48.618663   17890 start_flags.go:323] config:
	{Name:addons-443778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-443778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:04:48.620029   17890 out.go:177] * Starting control plane node addons-443778 in cluster addons-443778
	I1221 18:04:48.621171   17890 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:04:48.622589   17890 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:04:48.623828   17890 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:04:48.623862   17890 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1221 18:04:48.623873   17890 cache.go:56] Caching tarball of preloaded images
	I1221 18:04:48.623924   17890 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:04:48.623974   17890 preload.go:174] Found /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 18:04:48.623985   17890 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1221 18:04:48.624300   17890 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/config.json ...
	I1221 18:04:48.624320   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/config.json: {Name:mkd0c9faad4f5c61530803aa3ccb3d6e3a37a0ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:04:48.638767   17890 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:04:48.638871   17890 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:04:48.638886   17890 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1221 18:04:48.638890   17890 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1221 18:04:48.638897   17890 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:04:48.638907   17890 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from local cache
	I1221 18:05:00.050623   17890 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from cached tarball
	I1221 18:05:00.050658   17890 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:05:00.050699   17890 start.go:365] acquiring machines lock for addons-443778: {Name:mk9a7b3de6d9d2194b9452b8517f44b364318adb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:05:00.050784   17890 start.go:369] acquired machines lock for "addons-443778" in 69.296µs
	I1221 18:05:00.050807   17890 start.go:93] Provisioning new machine with config: &{Name:addons-443778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-443778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 18:05:00.050883   17890 start.go:125] createHost starting for "" (driver="docker")
	I1221 18:05:00.052750   17890 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1221 18:05:00.052977   17890 start.go:159] libmachine.API.Create for "addons-443778" (driver="docker")
	I1221 18:05:00.053007   17890 client.go:168] LocalClient.Create starting
	I1221 18:05:00.053089   17890 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem
	I1221 18:05:00.482616   17890 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem
	I1221 18:05:00.605372   17890 cli_runner.go:164] Run: docker network inspect addons-443778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 18:05:00.619543   17890 cli_runner.go:211] docker network inspect addons-443778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 18:05:00.619606   17890 network_create.go:281] running [docker network inspect addons-443778] to gather additional debugging logs...
	I1221 18:05:00.619623   17890 cli_runner.go:164] Run: docker network inspect addons-443778
	W1221 18:05:00.633729   17890 cli_runner.go:211] docker network inspect addons-443778 returned with exit code 1
	I1221 18:05:00.633759   17890 network_create.go:284] error running [docker network inspect addons-443778]: docker network inspect addons-443778: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-443778 not found
	I1221 18:05:00.633774   17890 network_create.go:286] output of [docker network inspect addons-443778]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-443778 not found
	
	** /stderr **
	I1221 18:05:00.633872   17890 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:05:00.648389   17890 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00280b720}
	I1221 18:05:00.648428   17890 network_create.go:124] attempt to create docker network addons-443778 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1221 18:05:00.648494   17890 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-443778 addons-443778
	I1221 18:05:00.698793   17890 network_create.go:108] docker network addons-443778 192.168.49.0/24 created
	I1221 18:05:00.698823   17890 kic.go:121] calculated static IP "192.168.49.2" for the "addons-443778" container
	I1221 18:05:00.698912   17890 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:05:00.713921   17890 cli_runner.go:164] Run: docker volume create addons-443778 --label name.minikube.sigs.k8s.io=addons-443778 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:05:00.729494   17890 oci.go:103] Successfully created a docker volume addons-443778
	I1221 18:05:00.729563   17890 cli_runner.go:164] Run: docker run --rm --name addons-443778-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-443778 --entrypoint /usr/bin/test -v addons-443778:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:05:04.460829   17890 cli_runner.go:217] Completed: docker run --rm --name addons-443778-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-443778 --entrypoint /usr/bin/test -v addons-443778:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (3.731214131s)
	I1221 18:05:04.460852   17890 oci.go:107] Successfully prepared a docker volume addons-443778
	I1221 18:05:04.460890   17890 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:05:04.460912   17890 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:05:04.460970   17890 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-443778:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:05:09.554150   17890 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-443778:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.093132101s)
	I1221 18:05:09.554180   17890 kic.go:203] duration metric: took 5.093267 seconds to extract preloaded images to volume
	W1221 18:05:09.554311   17890 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:05:09.554395   17890 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:05:09.605730   17890 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-443778 --name addons-443778 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-443778 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-443778 --network addons-443778 --ip 192.168.49.2 --volume addons-443778:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:05:09.904683   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Running}}
	I1221 18:05:09.921883   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:09.937958   17890 cli_runner.go:164] Run: docker exec addons-443778 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:05:10.006071   17890 oci.go:144] the created container "addons-443778" has a running status.
	I1221 18:05:10.006101   17890 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa...
	I1221 18:05:10.190304   17890 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:05:10.208519   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:10.228043   17890 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:05:10.228068   17890 kic_runner.go:114] Args: [docker exec --privileged addons-443778 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 18:05:10.309504   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:10.338698   17890 machine.go:88] provisioning docker machine ...
	I1221 18:05:10.338741   17890 ubuntu.go:169] provisioning hostname "addons-443778"
	I1221 18:05:10.338796   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:10.354413   17890 main.go:141] libmachine: Using SSH client type: native
	I1221 18:05:10.354740   17890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:05:10.354759   17890 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-443778 && echo "addons-443778" | sudo tee /etc/hostname
	I1221 18:05:10.551557   17890 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-443778
	
	I1221 18:05:10.551635   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:10.568397   17890 main.go:141] libmachine: Using SSH client type: native
	I1221 18:05:10.568730   17890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:05:10.568750   17890 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-443778' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-443778/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-443778' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:05:10.692633   17890 main.go:141] libmachine: SSH cmd err, output: <nil>: 
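
Each provisioning step above is a round trip over SSH to the port Docker mapped for 22/tcp (32772 in this run), authenticated with the freshly generated id_rsa. A minimal sketch of one such round trip with golang.org/x/crypto/ssh, assuming the key path and port from this run:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32772", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("out=%q err=%v\n", out, err)
	}
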
	I1221 18:05:10.692656   17890 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-9881/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-9881/.minikube}
	I1221 18:05:10.692686   17890 ubuntu.go:177] setting up certificates
	I1221 18:05:10.692700   17890 provision.go:83] configureAuth start
	I1221 18:05:10.692756   17890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-443778
	I1221 18:05:10.707898   17890 provision.go:138] copyHostCerts
	I1221 18:05:10.707996   17890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem (1078 bytes)
	I1221 18:05:10.708136   17890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem (1123 bytes)
	I1221 18:05:10.708213   17890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem (1679 bytes)
	I1221 18:05:10.708270   17890 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem org=jenkins.addons-443778 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-443778]
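
provision.go:112 issues a server certificate whose SAN list must cover every name and address a client might dial: the node IP, loopback, and the minikube hostnames. In Go terms that means setting DNSNames and IPAddresses on the x509 template. The sketch below self-signs for brevity, whereas minikube signs with its CA key; the SAN values are taken from the log line above.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-443778"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN entries from the log line above:
			DNSNames:    []string{"localhost", "minikube", "addons-443778"},
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
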
	I1221 18:05:10.978219   17890 provision.go:172] copyRemoteCerts
	I1221 18:05:10.978278   17890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:05:10.978309   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:10.995513   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:11.084686   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 18:05:11.104600   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1221 18:05:11.123563   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 18:05:11.142068   17890 provision.go:86] duration metric: configureAuth took 449.353529ms
	I1221 18:05:11.142086   17890 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:05:11.142222   17890 config.go:182] Loaded profile config "addons-443778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:05:11.142303   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:11.160661   17890 main.go:141] libmachine: Using SSH client type: native
	I1221 18:05:11.160957   17890 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1221 18:05:11.160976   17890 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 18:05:11.347694   17890 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 18:05:11.347725   17890 machine.go:91] provisioned docker machine in 1.009000508s
	I1221 18:05:11.347736   17890 client.go:171] LocalClient.Create took 11.294721209s
	I1221 18:05:11.347755   17890 start.go:167] duration metric: libmachine.API.Create for "addons-443778" took 11.294777954s
	I1221 18:05:11.347764   17890 start.go:300] post-start starting for "addons-443778" (driver="docker")
	I1221 18:05:11.347777   17890 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:05:11.347835   17890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:05:11.347877   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:11.363682   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:11.444802   17890 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:05:11.447567   17890 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:05:11.447598   17890 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:05:11.447608   17890 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:05:11.447614   17890 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:05:11.447624   17890 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/addons for local assets ...
	I1221 18:05:11.447684   17890 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/files for local assets ...
	I1221 18:05:11.447710   17890 start.go:303] post-start completed in 99.940044ms
	I1221 18:05:11.448025   17890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-443778
	I1221 18:05:11.463752   17890 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/config.json ...
	I1221 18:05:11.463960   17890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:05:11.463994   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:11.479404   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:11.557302   17890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:05:11.560811   17890 start.go:128] duration metric: createHost completed in 11.509915327s
	I1221 18:05:11.560829   17890 start.go:83] releasing machines lock for "addons-443778", held for 11.510033901s
	I1221 18:05:11.560884   17890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-443778
	I1221 18:05:11.575737   17890 ssh_runner.go:195] Run: cat /version.json
	I1221 18:05:11.575778   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:11.575822   17890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:05:11.575882   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:11.593121   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:11.596223   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:11.759879   17890 ssh_runner.go:195] Run: systemctl --version
	I1221 18:05:11.763540   17890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 18:05:11.897023   17890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:05:11.900874   17890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:05:11.916882   17890 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:05:11.916966   17890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:05:11.940657   17890 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
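
The two find/mv passes above sideline any loopback, bridge, or podman CNI configs so that only the CNI minikube installs later is active; the files are renamed with a .mk_disabled suffix rather than deleted, so the step is reversible. A minimal equivalent in Go, assuming the standard /etc/cni/net.d layout:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already sidelined on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
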
	I1221 18:05:11.940682   17890 start.go:475] detecting cgroup driver to use...
	I1221 18:05:11.940713   17890 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:05:11.940769   17890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 18:05:11.952778   17890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 18:05:11.961685   17890 docker.go:203] disabling cri-docker service (if available) ...
	I1221 18:05:11.961723   17890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 18:05:11.972505   17890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 18:05:11.983688   17890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 18:05:12.052196   17890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 18:05:12.127571   17890 docker.go:219] disabling docker service ...
	I1221 18:05:12.127638   17890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 18:05:12.143031   17890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 18:05:12.152143   17890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 18:05:12.224479   17890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 18:05:12.296418   17890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 18:05:12.305793   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:05:12.318840   17890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1221 18:05:12.318886   17890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:05:12.326817   17890 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1221 18:05:12.326857   17890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:05:12.334680   17890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:05:12.342303   17890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:05:12.349933   17890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 18:05:12.357001   17890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:05:12.363514   17890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:05:12.370136   17890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:05:12.439388   17890 ssh_runner.go:195] Run: sudo systemctl restart crio
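
The sed edits above pin the pause image and force the cgroupfs manager in cri-o's drop-in before the restart. A sketch of the same rewrite in Go using regexp; run it against a copy of the file rather than a live node, and note it omits the conmon_cgroup delete/re-add step shown in the log.

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		b, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Rewrite whole lines, commented or not, exactly as the sed patterns do.
		b = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(b, []byte(`cgroup_manager = "cgroupfs"`))
		b = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(b, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(path, b, 0o644); err != nil {
			log.Fatal(err)
		}
		// A systemctl daemon-reload && systemctl restart crio must follow, as in the log.
	}
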
	I1221 18:05:12.538516   17890 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 18:05:12.538587   17890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 18:05:12.541558   17890 start.go:543] Will wait 60s for crictl version
	I1221 18:05:12.541642   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:05:12.544259   17890 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 18:05:12.574162   17890 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1221 18:05:12.574241   17890 ssh_runner.go:195] Run: crio --version
	I1221 18:05:12.605197   17890 ssh_runner.go:195] Run: crio --version
	I1221 18:05:12.638168   17890 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1221 18:05:12.639573   17890 cli_runner.go:164] Run: docker network inspect addons-443778 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:05:12.654363   17890 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1221 18:05:12.657463   17890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
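
The bash one-liner above is an idempotent hosts-file update: strip any stale host.minikube.internal line, append the fresh mapping to the network gateway, and copy the result back into place. The same trick, sketched in Go:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		b, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // drop the stale entry, as grep -v does above
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.49.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
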
	I1221 18:05:12.666462   17890 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:05:12.666507   17890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 18:05:12.716229   17890 crio.go:496] all images are preloaded for cri-o runtime.
	I1221 18:05:12.716251   17890 crio.go:415] Images already preloaded, skipping extraction
	I1221 18:05:12.716290   17890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 18:05:12.745515   17890 crio.go:496] all images are preloaded for cri-o runtime.
	I1221 18:05:12.745539   17890 cache_images.go:84] Images are preloaded, skipping loading
	I1221 18:05:12.745610   17890 ssh_runner.go:195] Run: crio config
	I1221 18:05:12.784115   17890 cni.go:84] Creating CNI manager for ""
	I1221 18:05:12.784133   17890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:05:12.784148   17890 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:05:12.784165   17890 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-443778 NodeName:addons-443778 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 18:05:12.784272   17890 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-443778"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
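
The YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:176. A stripped-down sketch of that struct-to-manifest step with text/template; the field names and template text here are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		c := initCfg{"192.168.49.2", 8443, "addons-443778", "10.244.0.0/16"}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, c)
	}
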
	
	I1221 18:05:12.784326   17890 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-443778 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-443778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1221 18:05:12.784369   17890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1221 18:05:12.791708   17890 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:05:12.791758   17890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 18:05:12.798595   17890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1221 18:05:12.812866   17890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 18:05:12.826902   17890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1221 18:05:12.841165   17890 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:05:12.843833   17890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:05:12.852299   17890 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778 for IP: 192.168.49.2
	I1221 18:05:12.852335   17890 certs.go:190] acquiring lock for shared ca certs: {Name:mk1a19dbb52a881fd398c5196f3505713dce7712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:12.852457   17890 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key
	I1221 18:05:13.122346   17890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt ...
	I1221 18:05:13.122372   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt: {Name:mk343a280055ce0160f002a7784268f261842e57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.122530   17890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key ...
	I1221 18:05:13.122541   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key: {Name:mkf791f12f203db1406c729cc63e9176be98f506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.122610   17890 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key
	I1221 18:05:13.512576   17890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt ...
	I1221 18:05:13.512604   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt: {Name:mk0848666dac2b416ebd2b2029b4586890fefa50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.512753   17890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key ...
	I1221 18:05:13.512763   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key: {Name:mk47b36e2b95b474ef66967510fbd9c13d8958e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.512856   17890 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.key
	I1221 18:05:13.512869   17890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt with IP's: []
	I1221 18:05:13.734503   17890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt ...
	I1221 18:05:13.734530   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: {Name:mk419eb28a4004a15b081a84df9cf73510a67ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.734670   17890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.key ...
	I1221 18:05:13.734679   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.key: {Name:mkbef7a43327dbd483445b29e710fabc89eb59e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.734742   17890 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.key.dd3b5fb2
	I1221 18:05:13.734759   17890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1221 18:05:13.849723   17890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.crt.dd3b5fb2 ...
	I1221 18:05:13.849753   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.crt.dd3b5fb2: {Name:mk18fd225c19a3d2c32c1c0b5bcd37c57c4f3a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.849902   17890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.key.dd3b5fb2 ...
	I1221 18:05:13.849914   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.key.dd3b5fb2: {Name:mk4f60cfa82275b029e78563b3c45d2e63154bf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:13.849982   17890 certs.go:337] copying /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.crt
	I1221 18:05:13.850046   17890 certs.go:341] copying /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.key
	I1221 18:05:13.850086   17890 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.key
	I1221 18:05:13.850102   17890 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.crt with IP's: []
	I1221 18:05:14.190830   17890 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.crt ...
	I1221 18:05:14.190859   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.crt: {Name:mkc6198c7e7584c1189f473dadbaf375997eb0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:14.191012   17890 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.key ...
	I1221 18:05:14.191022   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.key: {Name:mkce48c2db7d29ca3935a06cb2c272bce9395371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:14.191224   17890 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem (1679 bytes)
	I1221 18:05:14.191257   17890 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem (1078 bytes)
	I1221 18:05:14.191279   17890 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:05:14.191304   17890 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem (1679 bytes)
	I1221 18:05:14.191852   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1221 18:05:14.212113   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 18:05:14.231426   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 18:05:14.250424   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 18:05:14.269954   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:05:14.289819   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 18:05:14.309281   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:05:14.328094   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:05:14.347211   17890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:05:14.366388   17890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1221 18:05:14.380781   17890 ssh_runner.go:195] Run: openssl version
	I1221 18:05:14.385350   17890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:05:14.393002   17890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:05:14.395682   17890 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:05:14.395734   17890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:05:14.401493   17890 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
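
The b5213941.0 symlink follows OpenSSL's hashed-directory convention: b5213941 is the subject-name hash printed by the openssl x509 -hash call two lines up, and a <hash>.0 link in /etc/ssl/certs is what lets TLS clients locate minikubeCA. The same two steps sketched from Go, shelling out to openssl:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // replace any stale link, as ln -fs does
		if err := os.Symlink(pemPath, link); err != nil {
			log.Fatal(err)
		}
	}
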
	I1221 18:05:14.408745   17890 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:05:14.411325   17890 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:05:14.411368   17890 kubeadm.go:404] StartCluster: {Name:addons-443778 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-443778 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:05:14.411433   17890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 18:05:14.411469   17890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 18:05:14.441543   17890 cri.go:89] found id: ""
	I1221 18:05:14.441608   17890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 18:05:14.448982   17890 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 18:05:14.455979   17890 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1221 18:05:14.456036   17890 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 18:05:14.463050   17890 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:05:14.463084   17890 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 18:05:14.536034   17890 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1221 18:05:14.594298   17890 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 18:05:23.902474   17890 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1221 18:05:23.902596   17890 kubeadm.go:322] [preflight] Running pre-flight checks
	I1221 18:05:23.902725   17890 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:05:23.902818   17890 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1221 18:05:23.902867   17890 kubeadm.go:322] OS: Linux
	I1221 18:05:23.902931   17890 kubeadm.go:322] CGROUPS_CPU: enabled
	I1221 18:05:23.902995   17890 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1221 18:05:23.903071   17890 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1221 18:05:23.903152   17890 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1221 18:05:23.903244   17890 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1221 18:05:23.903328   17890 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1221 18:05:23.903389   17890 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1221 18:05:23.903447   17890 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1221 18:05:23.903507   17890 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1221 18:05:23.903587   17890 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:05:23.903710   17890 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:05:23.903843   17890 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1221 18:05:23.903985   17890 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:05:23.905301   17890 out.go:204]   - Generating certificates and keys ...
	I1221 18:05:23.905399   17890 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1221 18:05:23.905485   17890 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1221 18:05:23.905570   17890 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:05:23.905638   17890 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:05:23.905723   17890 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1221 18:05:23.905790   17890 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1221 18:05:23.905894   17890 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1221 18:05:23.906078   17890 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-443778 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:05:23.906148   17890 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1221 18:05:23.906288   17890 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-443778 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:05:23.906368   17890 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:05:23.906445   17890 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:05:23.906498   17890 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1221 18:05:23.906590   17890 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:05:23.906668   17890 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:05:23.906741   17890 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:05:23.906838   17890 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:05:23.906924   17890 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:05:23.907051   17890 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:05:23.907157   17890 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:05:23.908699   17890 out.go:204]   - Booting up control plane ...
	I1221 18:05:23.908793   17890 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:05:23.908870   17890 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:05:23.908964   17890 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:05:23.909118   17890 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:05:23.909278   17890 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:05:23.909329   17890 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1221 18:05:23.909466   17890 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:05:23.909556   17890 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001918 seconds
	I1221 18:05:23.909689   17890 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:05:23.909852   17890 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:05:23.909930   17890 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:05:23.910177   17890 kubeadm.go:322] [mark-control-plane] Marking the node addons-443778 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 18:05:23.910238   17890 kubeadm.go:322] [bootstrap-token] Using token: t4knz8.rxxtvsvt2bvj3amt
	I1221 18:05:23.911580   17890 out.go:204]   - Configuring RBAC rules ...
	I1221 18:05:23.911727   17890 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:05:23.911852   17890 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:05:23.912014   17890 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:05:23.912208   17890 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:05:23.912348   17890 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:05:23.912429   17890 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:05:23.912535   17890 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:05:23.912596   17890 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1221 18:05:23.912653   17890 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1221 18:05:23.912660   17890 kubeadm.go:322] 
	I1221 18:05:23.912720   17890 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1221 18:05:23.912737   17890 kubeadm.go:322] 
	I1221 18:05:23.912844   17890 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1221 18:05:23.912851   17890 kubeadm.go:322] 
	I1221 18:05:23.912878   17890 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1221 18:05:23.912965   17890 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:05:23.913039   17890 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:05:23.913050   17890 kubeadm.go:322] 
	I1221 18:05:23.913098   17890 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1221 18:05:23.913110   17890 kubeadm.go:322] 
	I1221 18:05:23.913174   17890 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 18:05:23.913180   17890 kubeadm.go:322] 
	I1221 18:05:23.913241   17890 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1221 18:05:23.913343   17890 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:05:23.913454   17890 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:05:23.913464   17890 kubeadm.go:322] 
	I1221 18:05:23.913575   17890 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:05:23.913650   17890 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1221 18:05:23.913656   17890 kubeadm.go:322] 
	I1221 18:05:23.913722   17890 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token t4knz8.rxxtvsvt2bvj3amt \
	I1221 18:05:23.913807   17890 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b \
	I1221 18:05:23.913844   17890 kubeadm.go:322] 	--control-plane 
	I1221 18:05:23.913856   17890 kubeadm.go:322] 
	I1221 18:05:23.913994   17890 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:05:23.914009   17890 kubeadm.go:322] 
	I1221 18:05:23.914105   17890 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token t4knz8.rxxtvsvt2bvj3amt \
	I1221 18:05:23.914218   17890 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b 
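
The --discovery-token-ca-cert-hash printed in the join command is sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate; joining nodes use it to pin the CA they fetch via the bootstrap token. A sketch that recomputes it from the CA file used above:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		b, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(b)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// DER-encoded SubjectPublicKeyInfo of the CA's public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
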
	I1221 18:05:23.914229   17890 cni.go:84] Creating CNI manager for ""
	I1221 18:05:23.914234   17890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:05:23.915712   17890 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1221 18:05:23.916936   17890 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 18:05:23.920351   17890 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1221 18:05:23.920370   17890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1221 18:05:23.935513   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 18:05:24.560955   17890 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 18:05:24.561037   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:24.561048   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=addons-443778 minikube.k8s.io/updated_at=2023_12_21T18_05_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:24.567250   17890 ops.go:34] apiserver oom_adj: -16
	I1221 18:05:24.631042   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:25.131764   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:25.631299   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:26.132093   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:26.631358   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:27.132059   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:27.631420   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:28.131774   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:28.632063   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:29.131960   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:29.631759   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:30.131155   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:30.631113   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:31.131412   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:31.631970   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:32.131972   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:32.631967   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:33.131779   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:33.631688   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:34.131988   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:34.631231   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:35.132041   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:35.631826   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:36.131484   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:36.632024   17890 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:05:36.696195   17890 kubeadm.go:1088] duration metric: took 12.135208102s to wait for elevateKubeSystemPrivileges.
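
The burst of identical "kubectl get sa default" runs above, spaced 500ms apart, is a poll: kubeadm's RBAC bootstrap is only complete once the default service account exists in the cluster. A minimal sketch of that wait loop:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			// Exits zero once the default service account exists.
			if err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig").Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
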
	I1221 18:05:36.696236   17890 kubeadm.go:406] StartCluster complete in 22.284870442s
	I1221 18:05:36.696255   17890 settings.go:142] acquiring lock: {Name:mk8e49e823ae84efe44355981045de15cdb79660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:36.696374   17890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:05:36.696742   17890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/kubeconfig: {Name:mk377070c6d3dd4bc3f11638f8c446f488cf8c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:05:36.696925   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 18:05:36.697015   17890 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
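
The out-of-order timestamps in the "Setting addon" lines below are consistent with each addon being enabled from its own goroutine; that concurrency is a plausible reading of the interleaved log, illustrated here only as a toy sketch of fanning out per-addon work with a WaitGroup:

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		addons := []string{"ingress", "ingress-dns", "registry", "metrics-server", "yakd"}
		var wg sync.WaitGroup
		for _, a := range addons {
			a := a // capture a per-iteration copy (pre-Go 1.22 loop semantics)
			wg.Add(1)
			go func() {
				defer wg.Done()
				fmt.Printf("Setting addon %s=true\n", a) // stand-in for the real enable step
			}()
		}
		wg.Wait()
	}
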
	I1221 18:05:36.697092   17890 addons.go:69] Setting yakd=true in profile "addons-443778"
	I1221 18:05:36.697105   17890 addons.go:69] Setting ingress-dns=true in profile "addons-443778"
	I1221 18:05:36.697118   17890 addons.go:69] Setting registry=true in profile "addons-443778"
	I1221 18:05:36.697132   17890 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-443778"
	I1221 18:05:36.697114   17890 addons.go:237] Setting addon yakd=true in "addons-443778"
	I1221 18:05:36.697145   17890 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-443778"
	I1221 18:05:36.697150   17890 addons.go:69] Setting metrics-server=true in profile "addons-443778"
	I1221 18:05:36.697150   17890 addons.go:69] Setting inspektor-gadget=true in profile "addons-443778"
	I1221 18:05:36.697171   17890 addons.go:69] Setting default-storageclass=true in profile "addons-443778"
	I1221 18:05:36.697173   17890 config.go:182] Loaded profile config "addons-443778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:05:36.697176   17890 addons.go:237] Setting addon inspektor-gadget=true in "addons-443778"
	I1221 18:05:36.697172   17890 addons.go:69] Setting volumesnapshots=true in profile "addons-443778"
	I1221 18:05:36.697192   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.697193   17890 addons.go:69] Setting cloud-spanner=true in profile "addons-443778"
	I1221 18:05:36.697192   17890 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-443778"
	I1221 18:05:36.697205   17890 addons.go:237] Setting addon cloud-spanner=true in "addons-443778"
	I1221 18:05:36.697206   17890 addons.go:237] Setting addon volumesnapshots=true in "addons-443778"
	I1221 18:05:36.697221   17890 addons.go:69] Setting helm-tiller=true in profile "addons-443778"
	I1221 18:05:36.697224   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.697247   17890 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-443778"
	I1221 18:05:36.697250   17890 addons.go:237] Setting addon helm-tiller=true in "addons-443778"
	I1221 18:05:36.697255   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.697260   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.697281   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.697288   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.697184   17890 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-443778"
	I1221 18:05:36.697562   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697704   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697704   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697712   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697723   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697748   17890 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-443778"
	I1221 18:05:36.697765   17890 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-443778"
	I1221 18:05:36.697801   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.698053   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697177   17890 addons.go:237] Setting addon metrics-server=true in "addons-443778"
	I1221 18:05:36.698178   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.698199   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.698330   17890 addons.go:69] Setting gcp-auth=true in profile "addons-443778"
	I1221 18:05:36.698359   17890 mustload.go:65] Loading cluster: addons-443778
	I1221 18:05:36.698580   17890 config.go:182] Loaded profile config "addons-443778": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:05:36.698680   17890 addons.go:69] Setting ingress=true in profile "addons-443778"
	I1221 18:05:36.698701   17890 addons.go:237] Setting addon ingress=true in "addons-443778"
	I1221 18:05:36.698710   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.698757   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.698817   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.699187   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697133   17890 addons.go:237] Setting addon ingress-dns=true in "addons-443778"
	I1221 18:05:36.699919   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.700328   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697123   17890 addons.go:69] Setting storage-provisioner=true in profile "addons-443778"
	I1221 18:05:36.705150   17890 addons.go:237] Setting addon storage-provisioner=true in "addons-443778"
	I1221 18:05:36.705256   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.705750   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697562   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697713   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.697134   17890 addons.go:237] Setting addon registry=true in "addons-443778"
	I1221 18:05:36.720515   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.721001   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.756633   17890 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1221 18:05:36.758182   17890 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1221 18:05:36.758211   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1221 18:05:36.758280   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.763541   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.768399   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1221 18:05:36.770132   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1221 18:05:36.768190   17890 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-443778"
	I1221 18:05:36.771578   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.772066   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.775739   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1221 18:05:36.774394   17890 addons.go:237] Setting addon default-storageclass=true in "addons-443778"
	I1221 18:05:36.778278   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1221 18:05:36.777090   17890 out.go:177]   - Using image docker.io/registry:2.8.3
	I1221 18:05:36.777125   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:36.780668   17890 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1221 18:05:36.781210   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:36.783068   17890 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1221 18:05:36.781872   17890 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1221 18:05:36.781879   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1221 18:05:36.784463   17890 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1221 18:05:36.785987   17890 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1221 18:05:36.788193   17890 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 18:05:36.788209   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1221 18:05:36.788255   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.790472   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1221 18:05:36.786061   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1221 18:05:36.791799   17890 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I1221 18:05:36.790557   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.788084   17890 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1221 18:05:36.788094   17890 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1221 18:05:36.788102   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1221 18:05:36.786040   17890 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1221 18:05:36.786135   17890 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1221 18:05:36.793256   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1221 18:05:36.793276   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1221 18:05:36.793546   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.796443   17890 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1221 18:05:36.796882   17890 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 18:05:36.796484   17890 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1221 18:05:36.796521   17890 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1221 18:05:36.796567   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.796941   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.796973   17890 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1221 18:05:36.803288   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1221 18:05:36.801390   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1221 18:05:36.801400   17890 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1221 18:05:36.801408   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1221 18:05:36.801742   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1221 18:05:36.804790   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.804859   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.806735   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.806796   17890 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1221 18:05:36.808365   17890 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1221 18:05:36.809722   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1221 18:05:36.809748   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1221 18:05:36.811093   17890 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:05:36.808561   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1221 18:05:36.808670   17890 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 18:05:36.809793   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.812607   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.812852   17890 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:05:36.812868   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 18:05:36.812931   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.813091   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1221 18:05:36.813131   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.816917   17890 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1221 18:05:36.818285   17890 out.go:177]   - Using image docker.io/busybox:stable
	I1221 18:05:36.819575   17890 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 18:05:36.819591   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1221 18:05:36.819633   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.824525   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.829094   17890 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 18:05:36.829121   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 18:05:36.829174   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:36.838448   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.851177   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.857383   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.857571   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.859223   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.862064   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.862840   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.865134   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.872955   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.873472   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.875021   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:36.875323   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	W1221 18:05:36.897722   17890 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1221 18:05:36.897762   17890 retry.go:31] will retry after 171.48165ms: ssh: handshake failed: EOF
	I1221 18:05:36.900417   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 18:05:37.088195   17890 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1221 18:05:37.088224   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1221 18:05:37.187355   17890 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1221 18:05:37.187376   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1221 18:05:37.187583   17890 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1221 18:05:37.187619   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1221 18:05:37.190483   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1221 18:05:37.190517   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1221 18:05:37.205569   17890 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-443778" context rescaled to 1 replicas
	I1221 18:05:37.205616   17890 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 18:05:37.207178   17890 out.go:177] * Verifying Kubernetes components...
	I1221 18:05:37.208452   17890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:05:37.297608   17890 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1221 18:05:37.297646   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1221 18:05:37.302886   17890 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1221 18:05:37.302910   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1221 18:05:37.305404   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1221 18:05:37.305761   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1221 18:05:37.307259   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1221 18:05:37.307281   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1221 18:05:37.386768   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 18:05:37.386814   17890 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1221 18:05:37.386912   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1221 18:05:37.387363   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1221 18:05:37.389258   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1221 18:05:37.389655   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1221 18:05:37.389862   17890 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1221 18:05:37.389878   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1221 18:05:37.399439   17890 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1221 18:05:37.399500   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1221 18:05:37.487791   17890 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1221 18:05:37.487821   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1221 18:05:37.489870   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1221 18:05:37.501802   17890 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1221 18:05:37.501879   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1221 18:05:37.509387   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1221 18:05:37.509411   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1221 18:05:37.589647   17890 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1221 18:05:37.589721   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1221 18:05:37.596137   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:05:37.601519   17890 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1221 18:05:37.601543   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1221 18:05:37.687151   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1221 18:05:37.694413   17890 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 18:05:37.694446   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1221 18:05:37.787298   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1221 18:05:37.787383   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1221 18:05:37.804374   17890 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1221 18:05:37.804452   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1221 18:05:38.086901   17890 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1221 18:05:38.086975   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1221 18:05:38.101113   17890 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1221 18:05:38.101193   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1221 18:05:38.189964   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1221 18:05:38.386742   17890 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1221 18:05:38.386773   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1221 18:05:38.495985   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1221 18:05:38.496024   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1221 18:05:38.499205   17890 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1221 18:05:38.499226   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1221 18:05:38.686843   17890 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1221 18:05:38.686883   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1221 18:05:38.794341   17890 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.893881813s)
	I1221 18:05:38.794380   17890 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1221 18:05:38.794418   17890 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.585937585s)
	I1221 18:05:38.795427   17890 node_ready.go:35] waiting up to 6m0s for node "addons-443778" to be "Ready" ...
	I1221 18:05:38.885782   17890 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1221 18:05:38.885852   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1221 18:05:38.886576   17890 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 18:05:38.886625   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1221 18:05:38.987050   17890 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1221 18:05:38.987140   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1221 18:05:39.088791   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1221 18:05:39.292374   17890 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1221 18:05:39.292408   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1221 18:05:39.294560   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 18:05:39.398580   17890 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1221 18:05:39.398612   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1221 18:05:39.607882   17890 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1221 18:05:39.607908   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1221 18:05:39.905926   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1221 18:05:39.985940   17890 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1221 18:05:39.985973   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1221 18:05:40.185790   17890 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1221 18:05:40.185817   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1221 18:05:40.585918   17890 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 18:05:40.586007   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1221 18:05:40.892340   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:05:41.086704   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1221 18:05:41.605098   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.299654156s)
	I1221 18:05:41.605191   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.29940919s)
	I1221 18:05:41.605269   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.218416646s)
	I1221 18:05:41.607445   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.220050326s)
	I1221 18:05:41.607534   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.217855985s)
	I1221 18:05:43.200112   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.810792139s)
	I1221 18:05:43.200158   17890 addons.go:473] Verifying addon ingress=true in "addons-443778"
	I1221 18:05:43.200213   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.710308807s)
	I1221 18:05:43.200275   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.604104992s)
	I1221 18:05:43.200314   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.513123467s)
	I1221 18:05:43.200335   17890 addons.go:473] Verifying addon registry=true in "addons-443778"
	I1221 18:05:43.201894   17890 out.go:177] * Verifying ingress addon...
	I1221 18:05:43.200381   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.010325287s)
	I1221 18:05:43.200426   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.111551471s)
	I1221 18:05:43.200549   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.905957913s)
	I1221 18:05:43.200634   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.294670305s)
	I1221 18:05:43.203415   17890 addons.go:473] Verifying addon metrics-server=true in "addons-443778"
	I1221 18:05:43.203983   17890 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1221 18:05:43.204667   17890 out.go:177] * Verifying registry addon...
	W1221 18:05:43.204700   17890 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1221 18:05:43.206116   17890 retry.go:31] will retry after 236.837302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1221 18:05:43.207381   17890 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-443778 service yakd-dashboard -n yakd-dashboard
	
	
	I1221 18:05:43.206905   17890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1221 18:05:43.209767   17890 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1221 18:05:43.209786   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:43.213313   17890 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 18:05:43.213329   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:43.298806   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:05:43.443308   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1221 18:05:43.570139   17890 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1221 18:05:43.570207   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:43.585691   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:43.702538   17890 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1221 18:05:43.708833   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:43.711731   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:43.720173   17890 addons.go:237] Setting addon gcp-auth=true in "addons-443778"
	I1221 18:05:43.720218   17890 host.go:66] Checking if "addons-443778" exists ...
	I1221 18:05:43.720695   17890 cli_runner.go:164] Run: docker container inspect addons-443778 --format={{.State.Status}}
	I1221 18:05:43.745585   17890 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1221 18:05:43.745641   17890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-443778
	I1221 18:05:43.761347   17890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/addons-443778/id_rsa Username:docker}
	I1221 18:05:44.104218   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.017410957s)
	I1221 18:05:44.104267   17890 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-443778"
	I1221 18:05:44.105974   17890 out.go:177] * Verifying csi-hostpath-driver addon...
	I1221 18:05:44.107686   17890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1221 18:05:44.110675   17890 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 18:05:44.110697   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:44.208867   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:44.212015   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:44.444577   17890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.001220386s)
	I1221 18:05:44.446429   17890 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1221 18:05:44.447772   17890 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1221 18:05:44.449025   17890 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1221 18:05:44.449039   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1221 18:05:44.464427   17890 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1221 18:05:44.464445   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1221 18:05:44.479110   17890 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 18:05:44.479128   17890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1221 18:05:44.494185   17890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1221 18:05:44.611999   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:44.709222   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:44.712219   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:44.907318   17890 addons.go:473] Verifying addon gcp-auth=true in "addons-443778"
	I1221 18:05:44.908772   17890 out.go:177] * Verifying gcp-auth addon...
	I1221 18:05:44.911027   17890 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1221 18:05:44.987299   17890 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1221 18:05:44.987325   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:45.111755   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:45.209072   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:45.212541   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:45.299227   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:05:45.414540   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:45.612348   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:45.708665   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:45.712867   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:45.914990   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:46.112957   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:46.209268   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:46.215239   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:46.414921   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:46.612124   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:46.709426   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:46.712960   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:46.915174   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:47.112333   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:47.208903   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:47.212120   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:47.414693   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:47.612362   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:47.708713   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:47.712017   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:47.798763   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:05:47.914608   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:48.112618   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:48.208451   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:48.212553   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:48.414116   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:48.612502   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:48.708640   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:48.712077   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:48.914598   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:49.112118   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:49.208885   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:49.212222   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:49.414329   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:49.611793   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:49.708649   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:49.711875   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:49.914163   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:50.111628   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:50.208779   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:50.211606   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:50.298863   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:05:50.414232   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:50.611695   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:50.708637   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:50.712029   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:50.914110   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:51.112635   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:51.208401   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:51.211470   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:51.414460   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:51.611779   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:51.708521   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:51.711625   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:51.914437   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:52.111840   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:52.208590   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:52.211725   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:52.414290   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:52.611603   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:52.708552   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:52.711599   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:52.799084   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:05:52.914694   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:53.112273   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:53.208374   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:53.211414   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:53.414302   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:53.611784   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:05:53.708807   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:05:53.711649   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:05:53.914586   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:05:54.111877   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the same four kapi.go:96 selector polls (ingress-nginx, registry, gcp-auth, csi-hostpath-driver) repeat on a ~500ms cycle with all pods still Pending, while node_ready.go:58 logs node "addons-443778" as "Ready":"False" every ~2.5s, from 18:05:54 through 18:06:11 ...]
	I1221 18:06:11.298128   17890 node_ready.go:58] node "addons-443778" has status "Ready":"False"
	I1221 18:06:11.414839   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:11.611609   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:11.710741   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:11.713014   17890 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1221 18:06:11.713040   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:06:11.799221   17890 node_ready.go:49] node "addons-443778" has status "Ready":"True"
	I1221 18:06:11.799249   17890 node_ready.go:38] duration metric: took 33.003751975s waiting for node "addons-443778" to be "Ready" ...
	I1221 18:06:11.799261   17890 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:06:11.810596   17890 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4cr6h" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:11.914484   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:12.114604   17890 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1221 18:06:12.114633   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:12.208742   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:12.212725   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:06:12.413988   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:12.612552   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:12.707977   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:12.712572   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:06:12.914112   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:13.112648   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:13.208991   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:13.212337   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:06:13.315850   17890 pod_ready.go:92] pod "coredns-5dd5756b68-4cr6h" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:13.315869   17890 pod_ready.go:81] duration metric: took 1.505237771s waiting for pod "coredns-5dd5756b68-4cr6h" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.315891   17890 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.319804   17890 pod_ready.go:92] pod "etcd-addons-443778" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:13.319826   17890 pod_ready.go:81] duration metric: took 3.928262ms waiting for pod "etcd-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.319838   17890 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.323507   17890 pod_ready.go:92] pod "kube-apiserver-addons-443778" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:13.323524   17890 pod_ready.go:81] duration metric: took 3.680086ms waiting for pod "kube-apiserver-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.323532   17890 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.327128   17890 pod_ready.go:92] pod "kube-controller-manager-addons-443778" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:13.327144   17890 pod_ready.go:81] duration metric: took 3.606184ms waiting for pod "kube-controller-manager-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.327153   17890 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pdmqd" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.398666   17890 pod_ready.go:92] pod "kube-proxy-pdmqd" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:13.398687   17890 pod_ready.go:81] duration metric: took 71.528483ms waiting for pod "kube-proxy-pdmqd" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.398697   17890 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-443778" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.414217   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:13.613699   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:13.708977   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:13.712817   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:06:13.799451   17890 pod_ready.go:92] pod "kube-scheduler-addons-443778" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:13.799480   17890 pod_ready.go:81] duration metric: took 400.775427ms waiting for pod "kube-scheduler-addons-443778" in "kube-system" namespace to be "Ready" ...
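Aside for readers following these logs: the node_ready.go and pod_ready.go entries above all reduce to one predicate — whether the object's Ready condition reports True, which is the "Ready":"True"/"False" string printed on each check. A minimal client-go sketch of that predicate, assuming current k8s.io/api types (package and function names here are illustrative, not minikube's actual helpers):

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's PodReady condition is True --
	// the same status printed as "Ready":"True"/"False" in the log above.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// isNodeReady applies the equivalent check to a node's NodeReady condition.
	func isNodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}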
	I1221 18:06:13.799493   17890 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-gwrhh" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:13.914997   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:14.113144   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:14.208429   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:14.213353   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1221 18:06:15.804627   17890 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gwrhh" in "kube-system" namespace has status "Ready":"False"
	[... the four kapi.go:96 selector polls repeat on the same ~500ms cycle with all pods still Pending, while pod_ready.go:102 logs metrics-server-7c66d45ddc-gwrhh as "Ready":"False" every ~2s, from 18:06:15 through 18:06:43 ...]
	I1221 18:06:43.305506   17890 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gwrhh" in "kube-system" namespace has status "Ready":"False"
	I1221 18:06:43.414326   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:43.613963   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:43.709039   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:43.712624   17890 kapi.go:107] duration metric: took 1m0.505717389s to wait for kubernetes.io/minikube-addons=registry ...
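The kapi.go:96/kapi.go:107 pairs trace a label-selector wait: list the pods matching a selector, log any that are not yet Running, and loop until all matches are up (here the registry selector completed after 1m0.5s). A rough client-go sketch of that loop, assuming the ~500ms poll cadence visible in the timestamps (the helper name is invented for illustration; minikube's real implementation lives in kapi.go):

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForSelector polls pods matching selector in ns until every match
	// is Running, mirroring the kapi.go:96 log lines above. Sketch only.
	func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // assumed cadence, per the log timestamps
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}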
	I1221 18:06:43.914575   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:44.112933   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:44.208477   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:44.414473   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:44.612868   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:44.708259   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:44.913746   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:45.112907   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:45.255014   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:45.360979   17890 pod_ready.go:92] pod "metrics-server-7c66d45ddc-gwrhh" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:45.361002   17890 pod_ready.go:81] duration metric: took 31.56150074s waiting for pod "metrics-server-7c66d45ddc-gwrhh" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:45.361014   17890 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7jcgx" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:45.365416   17890 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-7jcgx" in "kube-system" namespace has status "Ready":"True"
	I1221 18:06:45.365438   17890 pod_ready.go:81] duration metric: took 4.415555ms waiting for pod "nvidia-device-plugin-daemonset-7jcgx" in "kube-system" namespace to be "Ready" ...
	I1221 18:06:45.365463   17890 pod_ready.go:38] duration metric: took 33.566187897s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:06:45.365481   17890 api_server.go:52] waiting for apiserver process to appear ...
	I1221 18:06:45.365510   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 18:06:45.365564   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1221 18:06:45.395657   17890 cri.go:89] found id: "77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139"
	I1221 18:06:45.395675   17890 cri.go:89] found id: ""
	I1221 18:06:45.395682   17890 logs.go:284] 1 containers: [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139]
	I1221 18:06:45.395720   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.398635   17890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 18:06:45.398682   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1221 18:06:45.414358   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:45.431817   17890 cri.go:89] found id: "fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c"
	I1221 18:06:45.431838   17890 cri.go:89] found id: ""
	I1221 18:06:45.431848   17890 logs.go:284] 1 containers: [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c]
	I1221 18:06:45.431899   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.435367   17890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 18:06:45.435414   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1221 18:06:45.466343   17890 cri.go:89] found id: "8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8"
	I1221 18:06:45.466361   17890 cri.go:89] found id: ""
	I1221 18:06:45.466368   17890 logs.go:284] 1 containers: [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8]
	I1221 18:06:45.466404   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.469263   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 18:06:45.469307   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1221 18:06:45.498838   17890 cri.go:89] found id: "c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a"
	I1221 18:06:45.498858   17890 cri.go:89] found id: ""
	I1221 18:06:45.498868   17890 logs.go:284] 1 containers: [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a]
	I1221 18:06:45.498913   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.501854   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 18:06:45.501907   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1221 18:06:45.534039   17890 cri.go:89] found id: "25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842"
	I1221 18:06:45.534055   17890 cri.go:89] found id: ""
	I1221 18:06:45.534064   17890 logs.go:284] 1 containers: [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842]
	I1221 18:06:45.534100   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.536879   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 18:06:45.536932   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1221 18:06:45.571074   17890 cri.go:89] found id: "8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e"
	I1221 18:06:45.571097   17890 cri.go:89] found id: ""
	I1221 18:06:45.571106   17890 logs.go:284] 1 containers: [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e]
	I1221 18:06:45.571149   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.574220   17890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 18:06:45.574278   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1221 18:06:45.606899   17890 cri.go:89] found id: "070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae"
	I1221 18:06:45.606922   17890 cri.go:89] found id: ""
	I1221 18:06:45.606929   17890 logs.go:284] 1 containers: [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae]
	I1221 18:06:45.606970   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:45.610074   17890 logs.go:123] Gathering logs for kube-controller-manager [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e] ...
	I1221 18:06:45.610099   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e"
	I1221 18:06:45.613083   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:45.663529   17890 logs.go:123] Gathering logs for kube-scheduler [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a] ...
	I1221 18:06:45.663558   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a"
	I1221 18:06:45.701276   17890 logs.go:123] Gathering logs for dmesg ...
	I1221 18:06:45.701305   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 18:06:45.708587   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:45.712491   17890 logs.go:123] Gathering logs for describe nodes ...
	I1221 18:06:45.712516   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1221 18:06:45.805956   17890 logs.go:123] Gathering logs for kube-apiserver [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139] ...
	I1221 18:06:45.805985   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139"
	I1221 18:06:45.849004   17890 logs.go:123] Gathering logs for etcd [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c] ...
	I1221 18:06:45.849031   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c"
	I1221 18:06:45.892275   17890 logs.go:123] Gathering logs for coredns [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8] ...
	I1221 18:06:45.892303   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8"
	I1221 18:06:45.914468   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:45.925274   17890 logs.go:123] Gathering logs for kube-proxy [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842] ...
	I1221 18:06:45.925302   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842"
	I1221 18:06:45.959008   17890 logs.go:123] Gathering logs for kindnet [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae] ...
	I1221 18:06:45.959032   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae"
	I1221 18:06:45.991829   17890 logs.go:123] Gathering logs for kubelet ...
	I1221 18:06:45.991855   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1221 18:06:46.033273   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.834827    1554 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:46.033459   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835180    1554 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:46.033699   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.835063    1554 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:46.033974   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835381    1554 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	I1221 18:06:46.071507   17890 logs.go:123] Gathering logs for container status ...
	I1221 18:06:46.071536   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 18:06:46.111771   17890 logs.go:123] Gathering logs for CRI-O ...
	I1221 18:06:46.111797   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 18:06:46.112726   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:46.192651   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:06:46.192677   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1221 18:06:46.192729   17890 out.go:239] X Problems detected in kubelet:
	W1221 18:06:46.192738   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.834827    1554 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:46.192746   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835180    1554 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:46.192755   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.835063    1554 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:46.192762   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835381    1554 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	I1221 18:06:46.192769   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:06:46.192775   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:06:46.208601   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:46.414295   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:46.613666   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:46.709300   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:46.914385   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:47.113525   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:47.208748   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:47.413837   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:47.613200   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:47.709294   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:47.915112   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:48.112873   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:48.208794   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:48.414024   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:48.612419   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:48.709109   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:48.914509   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:49.112725   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:49.208424   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:49.413707   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:49.612955   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:49.708388   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:49.914693   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:50.112829   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:50.208376   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:50.414592   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:50.612950   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:50.708834   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:50.913932   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:51.113560   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:51.208047   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:51.414036   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:51.612362   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:51.709527   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:51.915585   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:52.113435   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:52.209700   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:52.415036   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:52.613160   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:52.708903   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:52.914363   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:53.112865   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:53.208488   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:53.414960   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:53.612444   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:53.708298   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:53.914964   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:54.112324   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:54.209068   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:54.414641   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:54.612693   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:54.708993   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:54.914146   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:55.112824   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:55.209222   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:55.413793   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:55.613501   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:55.709007   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:55.914652   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:56.112832   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:56.194192   17890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:06:56.206006   17890 api_server.go:72] duration metric: took 1m19.000357703s to wait for apiserver process to appear ...
	I1221 18:06:56.206039   17890 api_server.go:88] waiting for apiserver healthz status ...
	I1221 18:06:56.206074   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 18:06:56.206123   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1221 18:06:56.208936   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:56.236905   17890 cri.go:89] found id: "77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139"
	I1221 18:06:56.236932   17890 cri.go:89] found id: ""
	I1221 18:06:56.236942   17890 logs.go:284] 1 containers: [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139]
	I1221 18:06:56.236994   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.240058   17890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 18:06:56.240118   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1221 18:06:56.270405   17890 cri.go:89] found id: "fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c"
	I1221 18:06:56.270429   17890 cri.go:89] found id: ""
	I1221 18:06:56.270436   17890 logs.go:284] 1 containers: [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c]
	I1221 18:06:56.270486   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.273461   17890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 18:06:56.273519   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1221 18:06:56.306034   17890 cri.go:89] found id: "8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8"
	I1221 18:06:56.306058   17890 cri.go:89] found id: ""
	I1221 18:06:56.306069   17890 logs.go:284] 1 containers: [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8]
	I1221 18:06:56.306117   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.309572   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 18:06:56.309636   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1221 18:06:56.343499   17890 cri.go:89] found id: "c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a"
	I1221 18:06:56.343526   17890 cri.go:89] found id: ""
	I1221 18:06:56.343534   17890 logs.go:284] 1 containers: [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a]
	I1221 18:06:56.343573   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.346630   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 18:06:56.346678   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1221 18:06:56.390816   17890 cri.go:89] found id: "25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842"
	I1221 18:06:56.390839   17890 cri.go:89] found id: ""
	I1221 18:06:56.390846   17890 logs.go:284] 1 containers: [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842]
	I1221 18:06:56.390897   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.394000   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 18:06:56.394049   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1221 18:06:56.414733   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:56.424772   17890 cri.go:89] found id: "8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e"
	I1221 18:06:56.424789   17890 cri.go:89] found id: ""
	I1221 18:06:56.424796   17890 logs.go:284] 1 containers: [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e]
	I1221 18:06:56.424830   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.427785   17890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 18:06:56.427837   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1221 18:06:56.457411   17890 cri.go:89] found id: "070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae"
	I1221 18:06:56.457430   17890 cri.go:89] found id: ""
	I1221 18:06:56.457438   17890 logs.go:284] 1 containers: [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae]
	I1221 18:06:56.457481   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:06:56.460291   17890 logs.go:123] Gathering logs for describe nodes ...
	I1221 18:06:56.460316   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1221 18:06:56.550875   17890 logs.go:123] Gathering logs for kube-proxy [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842] ...
	I1221 18:06:56.550902   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842"
	I1221 18:06:56.582691   17890 logs.go:123] Gathering logs for kindnet [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae] ...
	I1221 18:06:56.582714   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae"
	I1221 18:06:56.613465   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:56.615762   17890 logs.go:123] Gathering logs for kube-scheduler [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a] ...
	I1221 18:06:56.615790   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a"
	I1221 18:06:56.654759   17890 logs.go:123] Gathering logs for kube-controller-manager [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e] ...
	I1221 18:06:56.654785   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e"
	I1221 18:06:56.708199   17890 logs.go:123] Gathering logs for CRI-O ...
	I1221 18:06:56.708227   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 18:06:56.708462   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:56.779722   17890 logs.go:123] Gathering logs for kubelet ...
	I1221 18:06:56.779753   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1221 18:06:56.820079   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.834827    1554 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:56.820240   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835180    1554 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:56.820379   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.835063    1554 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:56.820529   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835381    1554 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	I1221 18:06:56.855357   17890 logs.go:123] Gathering logs for dmesg ...
	I1221 18:06:56.855386   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 18:06:56.866306   17890 logs.go:123] Gathering logs for kube-apiserver [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139] ...
	I1221 18:06:56.866328   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139"
	I1221 18:06:56.908138   17890 logs.go:123] Gathering logs for etcd [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c] ...
	I1221 18:06:56.908163   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c"
	I1221 18:06:56.914907   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:56.948253   17890 logs.go:123] Gathering logs for coredns [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8] ...
	I1221 18:06:56.948281   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8"
	I1221 18:06:56.980651   17890 logs.go:123] Gathering logs for container status ...
	I1221 18:06:56.980675   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 18:06:57.017935   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:06:57.017958   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1221 18:06:57.018005   17890 out.go:239] X Problems detected in kubelet:
	W1221 18:06:57.018013   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.834827    1554 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:57.018018   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835180    1554 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:57.018026   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.835063    1554 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:06:57.018032   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835381    1554 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	I1221 18:06:57.018037   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:06:57.018045   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:06:57.113068   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:57.208562   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:57.413858   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:57.612309   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:57.708599   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:57.913854   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:58.113318   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:58.208815   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:58.414429   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:58.612993   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:58.710748   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:58.915052   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:59.112257   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:59.209081   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:59.414967   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:06:59.613005   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:06:59.708549   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:06:59.915227   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:00.112717   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:00.208364   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:00.419959   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:00.612733   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:00.709009   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:00.914166   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:01.113547   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:01.209703   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:01.415145   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:01.612880   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:01.708995   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:01.914808   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:02.115051   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:02.211187   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:02.487278   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:02.614073   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:02.710974   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:02.986814   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:03.114328   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:03.209753   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:03.487474   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:03.614258   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:03.709094   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:03.914318   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:04.113116   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:04.209508   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:04.414396   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:04.612923   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:04.709204   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:04.914712   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:05.113484   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:05.209387   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:05.414741   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:05.613723   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:05.708711   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:05.916853   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:06.113304   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:06.208913   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:06.414131   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:06.612505   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:06.718798   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:06.953870   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:07.018637   17890 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1221 18:07:07.022858   17890 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1221 18:07:07.024162   17890 api_server.go:141] control plane version: v1.28.4
	I1221 18:07:07.024187   17890 api_server.go:131] duration metric: took 10.818142178s to wait for apiserver health ...
	I1221 18:07:07.024197   17890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 18:07:07.024221   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1221 18:07:07.024275   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1221 18:07:07.057106   17890 cri.go:89] found id: "77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139"
	I1221 18:07:07.057128   17890 cri.go:89] found id: ""
	I1221 18:07:07.057136   17890 logs.go:284] 1 containers: [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139]
	I1221 18:07:07.057175   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.060987   17890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1221 18:07:07.061058   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1221 18:07:07.096503   17890 cri.go:89] found id: "fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c"
	I1221 18:07:07.096527   17890 cri.go:89] found id: ""
	I1221 18:07:07.096534   17890 logs.go:284] 1 containers: [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c]
	I1221 18:07:07.096573   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.099759   17890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1221 18:07:07.099807   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1221 18:07:07.127533   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:07.133787   17890 cri.go:89] found id: "8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8"
	I1221 18:07:07.133810   17890 cri.go:89] found id: ""
	I1221 18:07:07.133818   17890 logs.go:284] 1 containers: [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8]
	I1221 18:07:07.133872   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.136901   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1221 18:07:07.136969   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1221 18:07:07.168931   17890 cri.go:89] found id: "c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a"
	I1221 18:07:07.168960   17890 cri.go:89] found id: ""
	I1221 18:07:07.168970   17890 logs.go:284] 1 containers: [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a]
	I1221 18:07:07.169031   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.172318   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1221 18:07:07.172374   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1221 18:07:07.205549   17890 cri.go:89] found id: "25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842"
	I1221 18:07:07.205574   17890 cri.go:89] found id: ""
	I1221 18:07:07.205583   17890 logs.go:284] 1 containers: [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842]
	I1221 18:07:07.205639   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.208792   17890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1221 18:07:07.208850   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1221 18:07:07.229887   17890 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1221 18:07:07.241746   17890 cri.go:89] found id: "8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e"
	I1221 18:07:07.241765   17890 cri.go:89] found id: ""
	I1221 18:07:07.241772   17890 logs.go:284] 1 containers: [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e]
	I1221 18:07:07.241817   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.245444   17890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1221 18:07:07.245502   17890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1221 18:07:07.276068   17890 cri.go:89] found id: "070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae"
	I1221 18:07:07.276090   17890 cri.go:89] found id: ""
	I1221 18:07:07.276098   17890 logs.go:284] 1 containers: [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae]
	I1221 18:07:07.276144   17890 ssh_runner.go:195] Run: which crictl
	I1221 18:07:07.279154   17890 logs.go:123] Gathering logs for CRI-O ...
	I1221 18:07:07.279173   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1221 18:07:07.350231   17890 logs.go:123] Gathering logs for container status ...
	I1221 18:07:07.350268   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1221 18:07:07.391141   17890 logs.go:123] Gathering logs for dmesg ...
	I1221 18:07:07.391176   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1221 18:07:07.403491   17890 logs.go:123] Gathering logs for describe nodes ...
	I1221 18:07:07.403517   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1221 18:07:07.461776   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:07.530071   17890 logs.go:123] Gathering logs for kube-apiserver [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139] ...
	I1221 18:07:07.530103   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139"
	I1221 18:07:07.612933   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:07.619128   17890 logs.go:123] Gathering logs for kube-scheduler [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a] ...
	I1221 18:07:07.619166   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a"
	I1221 18:07:07.699234   17890 logs.go:123] Gathering logs for kube-proxy [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842] ...
	I1221 18:07:07.699267   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842"
	I1221 18:07:07.709920   17890 kapi.go:107] duration metric: took 1m24.505932919s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1221 18:07:07.813682   17890 logs.go:123] Gathering logs for kube-controller-manager [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e] ...
	I1221 18:07:07.813716   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e"
	I1221 18:07:07.989577   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:08.192985   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:08.223405   17890 logs.go:123] Gathering logs for kindnet [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae] ...
	I1221 18:07:08.223435   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae"
	I1221 18:07:08.486934   17890 logs.go:123] Gathering logs for kubelet ...
	I1221 18:07:08.486962   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1221 18:07:08.503232   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:08.614483   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1221 18:07:08.625874   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.834827    1554 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:07:08.626090   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835180    1554 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:07:08.626247   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.835063    1554 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:07:08.626461   17890 logs.go:138] Found kubelet problem: Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835381    1554 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	I1221 18:07:08.666012   17890 logs.go:123] Gathering logs for etcd [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c] ...
	I1221 18:07:08.666052   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c"
	I1221 18:07:08.739817   17890 logs.go:123] Gathering logs for coredns [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8] ...
	I1221 18:07:08.739848   17890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8"
	I1221 18:07:08.823034   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:07:08.823064   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1221 18:07:08.823129   17890 out.go:239] X Problems detected in kubelet:
	W1221 18:07:08.823142   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.834827    1554 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:07:08.823155   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835180    1554 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:07:08.823174   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: W1221 18:05:36.835063    1554 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	W1221 18:07:08.823186   17890 out.go:239]   Dec 21 18:05:36 addons-443778 kubelet[1554]: E1221 18:05:36.835381    1554 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-443778" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-443778' and this object
	I1221 18:07:08.823197   17890 out.go:309] Setting ErrFile to fd 2...
	I1221 18:07:08.823209   17890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:07:08.914462   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:09.116563   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:09.414401   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:09.612485   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:09.914704   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:10.113596   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:10.414912   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:10.613542   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:10.914864   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:11.113382   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:11.414669   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:11.613723   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:11.914704   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:12.113222   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1221 18:07:12.414674   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:12.612711   17890 kapi.go:107] duration metric: took 1m28.505028875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1221 18:07:12.914496   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:13.414470   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:13.914438   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:14.414547   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:14.914333   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:15.413884   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:15.913799   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:16.413950   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:16.914031   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:17.413837   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:17.914956   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:18.414954   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:18.832261   17890 system_pods.go:59] 19 kube-system pods found
	I1221 18:07:18.832294   17890 system_pods.go:61] "coredns-5dd5756b68-4cr6h" [40df5c2f-53ec-43b5-9ddd-97f97ef19cbe] Running
	I1221 18:07:18.832300   17890 system_pods.go:61] "csi-hostpath-attacher-0" [5e0a173c-b610-4a4a-b04f-048ee9f132c5] Running
	I1221 18:07:18.832305   17890 system_pods.go:61] "csi-hostpath-resizer-0" [281e9dc2-3643-44a1-a13b-6ea4da1c51e0] Running
	I1221 18:07:18.832309   17890 system_pods.go:61] "csi-hostpathplugin-9hvzc" [a2341cc2-3127-4bb4-af78-3c1ae786d9a2] Running
	I1221 18:07:18.832312   17890 system_pods.go:61] "etcd-addons-443778" [9d8c5c01-8708-4f6e-8abb-d9005b058d1b] Running
	I1221 18:07:18.832316   17890 system_pods.go:61] "kindnet-7b74q" [dbdeb2e7-0aa0-4078-baea-e6b8bdc4da37] Running
	I1221 18:07:18.832321   17890 system_pods.go:61] "kube-apiserver-addons-443778" [32cea12a-93ed-4a51-9ee8-c37adaba530e] Running
	I1221 18:07:18.832328   17890 system_pods.go:61] "kube-controller-manager-addons-443778" [2d8de061-5d10-459c-b255-3113715be87a] Running
	I1221 18:07:18.832337   17890 system_pods.go:61] "kube-ingress-dns-minikube" [0eece98b-7a0d-4755-86cb-1a7f1dcce438] Running
	I1221 18:07:18.832346   17890 system_pods.go:61] "kube-proxy-pdmqd" [f9fecbe4-8260-43f0-8ddb-d2c95853a40d] Running
	I1221 18:07:18.832354   17890 system_pods.go:61] "kube-scheduler-addons-443778" [9985db6d-2448-4742-ae5a-443a202c8a60] Running
	I1221 18:07:18.832361   17890 system_pods.go:61] "metrics-server-7c66d45ddc-gwrhh" [457dc7a0-b171-45a7-845e-0ddc04fd4f40] Running
	I1221 18:07:18.832368   17890 system_pods.go:61] "nvidia-device-plugin-daemonset-7jcgx" [b94cc79e-0503-4022-93ae-c3ab0f768f0c] Running
	I1221 18:07:18.832374   17890 system_pods.go:61] "registry-hwsjn" [d18312cb-1683-475a-9ef7-6ab05125ff04] Running
	I1221 18:07:18.832380   17890 system_pods.go:61] "registry-proxy-vvt2k" [756697ed-d980-4a8f-817f-861ce02cbf7a] Running
	I1221 18:07:18.832387   17890 system_pods.go:61] "snapshot-controller-58dbcc7b99-6pp95" [c05cae34-ada5-4052-bfcb-8c37a8ad8b14] Running
	I1221 18:07:18.832393   17890 system_pods.go:61] "snapshot-controller-58dbcc7b99-j6lqt" [5cd11e77-b680-40d7-85a5-4e9e580b1ba3] Running
	I1221 18:07:18.832397   17890 system_pods.go:61] "storage-provisioner" [e732748e-af79-486f-855d-739ffcc11d51] Running
	I1221 18:07:18.832403   17890 system_pods.go:61] "tiller-deploy-7b677967b9-gtq28" [a26b272e-503d-4266-bcfc-55f9edf733a2] Running
	I1221 18:07:18.832410   17890 system_pods.go:74] duration metric: took 11.808205928s to wait for pod list to return data ...
	I1221 18:07:18.832419   17890 default_sa.go:34] waiting for default service account to be created ...
	I1221 18:07:18.834541   17890 default_sa.go:45] found service account: "default"
	I1221 18:07:18.834570   17890 default_sa.go:55] duration metric: took 2.143756ms for default service account to be created ...
	I1221 18:07:18.834578   17890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 18:07:18.842441   17890 system_pods.go:86] 19 kube-system pods found
	I1221 18:07:18.842473   17890 system_pods.go:89] "coredns-5dd5756b68-4cr6h" [40df5c2f-53ec-43b5-9ddd-97f97ef19cbe] Running
	I1221 18:07:18.842482   17890 system_pods.go:89] "csi-hostpath-attacher-0" [5e0a173c-b610-4a4a-b04f-048ee9f132c5] Running
	I1221 18:07:18.842489   17890 system_pods.go:89] "csi-hostpath-resizer-0" [281e9dc2-3643-44a1-a13b-6ea4da1c51e0] Running
	I1221 18:07:18.842496   17890 system_pods.go:89] "csi-hostpathplugin-9hvzc" [a2341cc2-3127-4bb4-af78-3c1ae786d9a2] Running
	I1221 18:07:18.842502   17890 system_pods.go:89] "etcd-addons-443778" [9d8c5c01-8708-4f6e-8abb-d9005b058d1b] Running
	I1221 18:07:18.842509   17890 system_pods.go:89] "kindnet-7b74q" [dbdeb2e7-0aa0-4078-baea-e6b8bdc4da37] Running
	I1221 18:07:18.842515   17890 system_pods.go:89] "kube-apiserver-addons-443778" [32cea12a-93ed-4a51-9ee8-c37adaba530e] Running
	I1221 18:07:18.842526   17890 system_pods.go:89] "kube-controller-manager-addons-443778" [2d8de061-5d10-459c-b255-3113715be87a] Running
	I1221 18:07:18.842539   17890 system_pods.go:89] "kube-ingress-dns-minikube" [0eece98b-7a0d-4755-86cb-1a7f1dcce438] Running
	I1221 18:07:18.842546   17890 system_pods.go:89] "kube-proxy-pdmqd" [f9fecbe4-8260-43f0-8ddb-d2c95853a40d] Running
	I1221 18:07:18.842557   17890 system_pods.go:89] "kube-scheduler-addons-443778" [9985db6d-2448-4742-ae5a-443a202c8a60] Running
	I1221 18:07:18.842564   17890 system_pods.go:89] "metrics-server-7c66d45ddc-gwrhh" [457dc7a0-b171-45a7-845e-0ddc04fd4f40] Running
	I1221 18:07:18.842573   17890 system_pods.go:89] "nvidia-device-plugin-daemonset-7jcgx" [b94cc79e-0503-4022-93ae-c3ab0f768f0c] Running
	I1221 18:07:18.842580   17890 system_pods.go:89] "registry-hwsjn" [d18312cb-1683-475a-9ef7-6ab05125ff04] Running
	I1221 18:07:18.842589   17890 system_pods.go:89] "registry-proxy-vvt2k" [756697ed-d980-4a8f-817f-861ce02cbf7a] Running
	I1221 18:07:18.842599   17890 system_pods.go:89] "snapshot-controller-58dbcc7b99-6pp95" [c05cae34-ada5-4052-bfcb-8c37a8ad8b14] Running
	I1221 18:07:18.842608   17890 system_pods.go:89] "snapshot-controller-58dbcc7b99-j6lqt" [5cd11e77-b680-40d7-85a5-4e9e580b1ba3] Running
	I1221 18:07:18.842617   17890 system_pods.go:89] "storage-provisioner" [e732748e-af79-486f-855d-739ffcc11d51] Running
	I1221 18:07:18.842626   17890 system_pods.go:89] "tiller-deploy-7b677967b9-gtq28" [a26b272e-503d-4266-bcfc-55f9edf733a2] Running
	I1221 18:07:18.842638   17890 system_pods.go:126] duration metric: took 8.053378ms to wait for k8s-apps to be running ...
	I1221 18:07:18.842650   17890 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 18:07:18.842708   17890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:07:18.854295   17890 system_svc.go:56] duration metric: took 11.639698ms WaitForService to wait for kubelet.
	I1221 18:07:18.854318   17890 kubeadm.go:581] duration metric: took 1m41.648675659s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1221 18:07:18.854343   17890 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:07:18.856611   17890 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 18:07:18.856640   17890 node_conditions.go:123] node cpu capacity is 8
	I1221 18:07:18.856654   17890 node_conditions.go:105] duration metric: took 2.305426ms to run NodePressure ...
	I1221 18:07:18.856666   17890 start.go:228] waiting for startup goroutines ...
	I1221 18:07:18.913997   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:19.413678   17890 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1221 18:07:19.914624   17890 kapi.go:107] duration metric: took 1m35.003597291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1221 18:07:19.916592   17890 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-443778 cluster.
	I1221 18:07:19.917950   17890 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1221 18:07:19.919351   17890 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1221 18:07:19.920694   17890 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1221 18:07:19.921934   17890 addons.go:508] enable addons completed in 1m43.224925045s: enabled=[helm-tiller nvidia-device-plugin ingress-dns cloud-spanner default-storageclass storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1221 18:07:19.921968   17890 start.go:233] waiting for cluster config update ...
	I1221 18:07:19.921984   17890 start.go:242] writing updated cluster config ...
	I1221 18:07:19.922201   17890 ssh_runner.go:195] Run: rm -f paused
	I1221 18:07:19.967414   17890 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1221 18:07:19.969082   17890 out.go:177] * Done! kubectl is now configured to use "addons-443778" cluster and "default" namespace by default
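The `gcp-auth-skip-secret` opt-out mentioned in the log above is applied as a pod label. A minimal sketch, assuming the webhook honors a `gcp-auth-skip-secret: "true"` label as the message suggests; the pod name is illustrative and the image is one already pulled in this run:

cat <<'EOF' | kubectl --context addons-443778 apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                   # illustrative name
  labels:
    gcp-auth-skip-secret: "true"       # opt this pod out of credential mounting
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/hello-app:1.0
EOF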
	
	
	==> CRI-O <==
	Dec 21 18:10:05 addons-443778 crio[951]: time="2023-12-21 18:10:05.995393583Z" level=info msg="Removing container: 5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f" id=52130383-a76e-4207-9db4-a99ae0a8e5fe name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 18:10:06 addons-443778 crio[951]: time="2023-12-21 18:10:06.008544187Z" level=info msg="Removed container 5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=52130383-a76e-4207-9db4-a99ae0a8e5fe name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 18:10:06 addons-443778 crio[951]: time="2023-12-21 18:10:06.936131416Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=1bf7c609-26c1-496f-b4ef-4aeed5ab2707 name=/runtime.v1.ImageService/PullImage
	Dec 21 18:10:06 addons-443778 crio[951]: time="2023-12-21 18:10:06.936872081Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=6b5f7594-33d3-4098-b2cb-925772847a0a name=/runtime.v1.ImageService/ImageStatus
	Dec 21 18:10:06 addons-443778 crio[951]: time="2023-12-21 18:10:06.937757997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6b5f7594-33d3-4098-b2cb-925772847a0a name=/runtime.v1.ImageService/ImageStatus
	Dec 21 18:10:06 addons-443778 crio[951]: time="2023-12-21 18:10:06.938692712Z" level=info msg="Creating container: default/hello-world-app-5d77478584-dvhpr/hello-world-app" id=109fbe3f-9475-4263-80e5-a8575c6592c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 18:10:06 addons-443778 crio[951]: time="2023-12-21 18:10:06.938770310Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 21 18:10:07 addons-443778 crio[951]: time="2023-12-21 18:10:07.008911191Z" level=info msg="Created container 17cc2dd654160ddf2615a62d864c14719fcaba71d9a3feb4631c951f21b153d0: default/hello-world-app-5d77478584-dvhpr/hello-world-app" id=109fbe3f-9475-4263-80e5-a8575c6592c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 18:10:07 addons-443778 crio[951]: time="2023-12-21 18:10:07.009277698Z" level=info msg="Starting container: 17cc2dd654160ddf2615a62d864c14719fcaba71d9a3feb4631c951f21b153d0" id=957eb249-4bf7-4cc6-9f61-03f4e9cf0ef0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 18:10:07 addons-443778 crio[951]: time="2023-12-21 18:10:07.016308987Z" level=info msg="Started container" PID=10536 containerID=17cc2dd654160ddf2615a62d864c14719fcaba71d9a3feb4631c951f21b153d0 description=default/hello-world-app-5d77478584-dvhpr/hello-world-app id=957eb249-4bf7-4cc6-9f61-03f4e9cf0ef0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=30cb63148982b8a60a981e8f69b8f99547afa6e60d08802efd2a1684cd6d87b0
	Dec 21 18:10:07 addons-443778 crio[951]: time="2023-12-21 18:10:07.502962477Z" level=info msg="Stopping container: fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747 (timeout: 2s)" id=9c260107-a131-49d1-ada2-c2b46d663a40 name=/runtime.v1.RuntimeService/StopContainer
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.508355844Z" level=warning msg="Stopping container fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9c260107-a131-49d1-ada2-c2b46d663a40 name=/runtime.v1.RuntimeService/StopContainer
	Dec 21 18:10:09 addons-443778 conmon[6053]: conmon fe3517512c798004e467 <ninfo>: container 6065 exited with status 137
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.637546288Z" level=info msg="Stopped container fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747: ingress-nginx/ingress-nginx-controller-69cff4fd79-vdnk7/controller" id=9c260107-a131-49d1-ada2-c2b46d663a40 name=/runtime.v1.RuntimeService/StopContainer
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.638062180Z" level=info msg="Stopping pod sandbox: 0ac1a9f9a6489e79d7b3f7fbfd2fe1af3d00c1e86e95f0cca1310aa4bd8793be" id=06bfbe89-0cbe-4b9e-a0fc-fef458abc5d4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.640948609Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-FTPY5F2TM6UVGW3C - [0:0]\n:KUBE-HP-FE36HRQO6FLC4MPJ - [0:0]\n-X KUBE-HP-FE36HRQO6FLC4MPJ\n-X KUBE-HP-FTPY5F2TM6UVGW3C\nCOMMIT\n"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.642448593Z" level=info msg="Closing host port tcp:80"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.642484480Z" level=info msg="Closing host port tcp:443"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.643860323Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.643878613Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.644044382Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-vdnk7 Namespace:ingress-nginx ID:0ac1a9f9a6489e79d7b3f7fbfd2fe1af3d00c1e86e95f0cca1310aa4bd8793be UID:baec8a62-37d3-41ac-a6ab-35af85136983 NetNS:/var/run/netns/0053183e-bb13-4d40-86d4-ff4f462a4900 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.644194236Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-vdnk7 from CNI network \"kindnet\" (type=ptp)"
	Dec 21 18:10:09 addons-443778 crio[951]: time="2023-12-21 18:10:09.682271245Z" level=info msg="Stopped pod sandbox: 0ac1a9f9a6489e79d7b3f7fbfd2fe1af3d00c1e86e95f0cca1310aa4bd8793be" id=06bfbe89-0cbe-4b9e-a0fc-fef458abc5d4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 21 18:10:10 addons-443778 crio[951]: time="2023-12-21 18:10:10.006086272Z" level=info msg="Removing container: fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747" id=f7eee9c3-7e93-468a-a0e2-40ffcbffe549 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 21 18:10:10 addons-443778 crio[951]: time="2023-12-21 18:10:10.018568133Z" level=info msg="Removed container fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747: ingress-nginx/ingress-nginx-controller-69cff4fd79-vdnk7/controller" id=f7eee9c3-7e93-468a-a0e2-40ffcbffe549 name=/runtime.v1.RuntimeService/RemoveContainer
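The exit status 137 reported by conmon above is 128 + 9: the controller did not exit within the 2s stop timeout and was then SIGKILLed. A sketch of how the exit code could be confirmed from the node while the container record still exists (the ID prefix is from this run; `crictl inspect` accepts truncated IDs):

out/minikube-linux-amd64 -p addons-443778 ssh \
  "sudo crictl inspect fe3517512c7980 | grep -i exitCode"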
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17cc2dd654160       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   30cb63148982b       hello-world-app-5d77478584-dvhpr
	030eff80c86fa       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   06265907cab9b       headlamp-777fd4b855-r2xfj
	b4210d01bc500       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   3e4737d833ad8       nginx
	44b0ab5493972       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   bf0fa29c05ee9       gcp-auth-d4c87556c-sc24j
	e52c1e6213f6e       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   c51f8ca72be13       yakd-dashboard-9947fc6bf-v4mrb
	2921efe9c40aa       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   97f9b9229abb7       ingress-nginx-admission-patch-q4k4c
	26a66edcd10cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   61e4021bc323f       ingress-nginx-admission-create-gqcl7
	8038eb3946653       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   5d05d1a0fe249       coredns-5dd5756b68-4cr6h
	2cdeb7ccae3f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   634186c5491e2       storage-provisioner
	070cc0f84091e       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   aa3bd4febdd62       kindnet-7b74q
	25307789ff85e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   2f5be08e64ffa       kube-proxy-pdmqd
	77392f2d0fc20       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   147f7d79f1778       kube-apiserver-addons-443778
	c40ad4bb410e5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   c157d731afd85       kube-scheduler-addons-443778
	8bb787b0999d6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   edf076606ff3d       kube-controller-manager-addons-443778
	fb0f9f76df2b2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   50cb7d2219e11       etcd-addons-443778
	
	
	==> coredns [8038eb3946653b44971eb3febec4ffc29622feabec0c10d460037eaef51716a8] <==
	[INFO] 10.244.0.10:60084 - 38588 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068369s
	[INFO] 10.244.0.10:49445 - 12482 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.002414152s
	[INFO] 10.244.0.10:49445 - 14020 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003236636s
	[INFO] 10.244.0.10:58679 - 22256 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.002192434s
	[INFO] 10.244.0.10:58679 - 28661 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003688342s
	[INFO] 10.244.0.10:58946 - 21879 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003893766s
	[INFO] 10.244.0.10:58946 - 22896 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00395491s
	[INFO] 10.244.0.10:33303 - 41789 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070677s
	[INFO] 10.244.0.10:33303 - 39480 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000123338s
	[INFO] 10.244.0.21:37837 - 37994 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213552s
	[INFO] 10.244.0.21:51949 - 29060 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027685s
	[INFO] 10.244.0.21:58202 - 9548 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091844s
	[INFO] 10.244.0.21:53032 - 14372 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010849s
	[INFO] 10.244.0.21:41014 - 63066 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089682s
	[INFO] 10.244.0.21:44921 - 7861 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152113s
	[INFO] 10.244.0.21:59400 - 58004 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005839232s
	[INFO] 10.244.0.21:44637 - 42012 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005870158s
	[INFO] 10.244.0.21:54471 - 58268 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004864878s
	[INFO] 10.244.0.21:42694 - 63312 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005583184s
	[INFO] 10.244.0.21:46054 - 54532 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005667061s
	[INFO] 10.244.0.21:53709 - 54340 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005680754s
	[INFO] 10.244.0.21:45067 - 2288 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000579009s
	[INFO] 10.244.0.21:60377 - 58832 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000665687s
	[INFO] 10.244.0.23:42962 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000178593s
	[INFO] 10.244.0.23:53575 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111286s
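The NXDOMAIN bursts above are the pod resolver walking its search path (the cluster.local suffixes, the GCE zone and project domains, google.internal) before the absolute query finally succeeds, a consequence of the default ndots:5 in pod resolv.conf. A sketch that reproduces the pattern from a throwaway pod; the image and pod name are illustrative:

kubectl --context addons-443778 run dns-probe --rm -it --restart=Never \
  --image=busybox:1.36 -- sh -c 'cat /etc/resolv.conf; nslookup storage.googleapis.com'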
	
	
	==> describe nodes <==
	Name:               addons-443778
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-443778
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea
	                    minikube.k8s.io/name=addons-443778
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_21T18_05_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-443778
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Dec 2023 18:05:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-443778
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Dec 2023 18:10:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Dec 2023 18:08:27 +0000   Thu, 21 Dec 2023 18:05:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Dec 2023 18:08:27 +0000   Thu, 21 Dec 2023 18:05:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Dec 2023 18:08:27 +0000   Thu, 21 Dec 2023 18:05:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Dec 2023 18:08:27 +0000   Thu, 21 Dec 2023 18:06:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-443778
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6ea6d939234b8c837a52bb7b0bceab
	  System UUID:                fc0ea402-fb31-4025-94af-9fd202fcf64f
	  Boot ID:                    d99d8f8f-1497-48b1-8406-284c1d2cae5c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-dvhpr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-d4c87556c-sc24j                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  headlamp                    headlamp-777fd4b855-r2xfj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-5dd5756b68-4cr6h                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m38s
	  kube-system                 etcd-addons-443778                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m50s
	  kube-system                 kindnet-7b74q                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m38s
	  kube-system                 kube-apiserver-addons-443778             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-443778    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-pdmqd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-443778             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-v4mrb           0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             348Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node addons-443778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node addons-443778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x8 over 4m57s)  kubelet          Node addons-443778 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s                  kubelet          Node addons-443778 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s                  kubelet          Node addons-443778 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s                  kubelet          Node addons-443778 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m38s                  node-controller  Node addons-443778 event: Registered Node addons-443778 in Controller
	  Normal  NodeReady                4m3s                   kubelet          Node addons-443778 status is now: NodeReady
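The request percentages in the tables above are requests divided by the node's allocatable capacity, truncated to whole percent: 850m of 8 CPUs (8000m) is roughly 10.6%, and 348Mi of 32859428Ki is roughly 1.1%. A quick shell check:

echo $(( 850 * 100 / 8000 ))             # cpu requests vs 8000m allocatable -> 10
echo $(( 348 * 1024 * 100 / 32859428 ))  # memory requests vs allocatable Ki -> 1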
	
	
	==> dmesg <==
	[  +0.008919] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004643] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001641] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001310] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001258] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.001854] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000822] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000935] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000746] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.348090] kauditd_printk_skb: 36 callbacks suppressed
	[Dec21 18:07] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	[  +1.016357] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	[  +2.015835] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	[Dec21 18:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	[  +8.191344] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	[ +16.126774] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	[ +32.509601] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 92 16 58 bb b7 69 6a 34 77 91 21 cd 08 00
	
	
	==> etcd [fb0f9f76df2b21bea5096eb160667148a6262c951260a4239fb7b9e4b48f6a8c] <==
	{"level":"info","ts":"2023-12-21T18:05:40.303767Z","caller":"traceutil/trace.go:171","msg":"trace[1999100037] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"109.020602ms","start":"2023-12-21T18:05:40.194736Z","end":"2023-12-21T18:05:40.303756Z","steps":["trace[1999100037] 'process raft request'  (duration: 108.51492ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-21T18:05:40.594529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.780359ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025967091350712 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/cloud-spanner-emulator.17a2eaf370d85059\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/cloud-spanner-emulator.17a2eaf370d85059\" value_size:646 lease:8128025967091349955 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-21T18:05:40.594797Z","caller":"traceutil/trace.go:171","msg":"trace[1717559740] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"108.84127ms","start":"2023-12-21T18:05:40.485945Z","end":"2023-12-21T18:05:40.594786Z","steps":["trace[1717559740] 'process raft request'  (duration: 108.797238ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-21T18:05:40.595463Z","caller":"traceutil/trace.go:171","msg":"trace[1696931492] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"109.458071ms","start":"2023-12-21T18:05:40.485796Z","end":"2023-12-21T18:05:40.595254Z","steps":["trace[1696931492] 'compare'  (duration: 104.389629ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-21T18:05:40.595629Z","caller":"traceutil/trace.go:171","msg":"trace[167635691] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"109.79107ms","start":"2023-12-21T18:05:40.485825Z","end":"2023-12-21T18:05:40.595617Z","steps":["trace[167635691] 'process raft request'  (duration: 108.855667ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-21T18:05:40.689852Z","caller":"traceutil/trace.go:171","msg":"trace[237989628] linearizableReadLoop","detail":"{readStateIndex:445; appliedIndex:441; }","duration":"179.169876ms","start":"2023-12-21T18:05:40.510666Z","end":"2023-12-21T18:05:40.689836Z","steps":["trace[237989628] 'read index received'  (duration: 83.870413ms)","trace[237989628] 'applied index is now lower than readState.Index'  (duration: 95.298793ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-21T18:05:40.690006Z","caller":"traceutil/trace.go:171","msg":"trace[1429248426] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"183.36898ms","start":"2023-12-21T18:05:40.506628Z","end":"2023-12-21T18:05:40.689997Z","steps":["trace[1429248426] 'process raft request'  (duration: 183.127165ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-21T18:05:40.690267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.63103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-21T18:05:40.69033Z","caller":"traceutil/trace.go:171","msg":"trace[537517039] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:433; }","duration":"179.704636ms","start":"2023-12-21T18:05:40.510616Z","end":"2023-12-21T18:05:40.69032Z","steps":["trace[537517039] 'agreement among raft nodes before linearized reading'  (duration: 179.583212ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-21T18:05:40.696717Z","caller":"traceutil/trace.go:171","msg":"trace[1604142314] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"102.429166ms","start":"2023-12-21T18:05:40.594272Z","end":"2023-12-21T18:05:40.696701Z","steps":["trace[1604142314] 'process raft request'  (duration: 102.220957ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-21T18:05:40.710508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.119321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-21T18:05:40.791846Z","caller":"traceutil/trace.go:171","msg":"trace[551926607] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:0; response_revision:435; }","duration":"200.432619ms","start":"2023-12-21T18:05:40.591366Z","end":"2023-12-21T18:05:40.791799Z","steps":["trace[551926607] 'agreement among raft nodes before linearized reading'  (duration: 118.109014ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-21T18:05:40.713097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.59474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-12-21T18:05:40.713139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.47735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
	{"level":"warn","ts":"2023-12-21T18:05:40.713162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.848576ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-21T18:05:40.79239Z","caller":"traceutil/trace.go:171","msg":"trace[373244368] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:438; }","duration":"206.064588ms","start":"2023-12-21T18:05:40.58631Z","end":"2023-12-21T18:05:40.792374Z","steps":["trace[373244368] 'agreement among raft nodes before linearized reading'  (duration: 126.842567ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-21T18:05:40.79261Z","caller":"traceutil/trace.go:171","msg":"trace[1622569722] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:438; }","duration":"206.10287ms","start":"2023-12-21T18:05:40.586485Z","end":"2023-12-21T18:05:40.792588Z","steps":["trace[1622569722] 'agreement among raft nodes before linearized reading'  (duration: 126.58208ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-21T18:05:40.792811Z","caller":"traceutil/trace.go:171","msg":"trace[746888492] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:438; }","duration":"206.141334ms","start":"2023-12-21T18:05:40.586655Z","end":"2023-12-21T18:05:40.792796Z","steps":["trace[746888492] 'agreement among raft nodes before linearized reading'  (duration: 126.46271ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-21T18:05:40.80608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.571171ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-21T18:05:40.885705Z","caller":"traceutil/trace.go:171","msg":"trace[1498696615] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:438; }","duration":"205.731127ms","start":"2023-12-21T18:05:40.601484Z","end":"2023-12-21T18:05:40.807215Z","steps":["trace[1498696615] 'agreement among raft nodes before linearized reading'  (duration: 109.282534ms)","trace[1498696615] 'get authentication metadata'  (duration: 92.393067ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-21T18:07:34.926171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.926784ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025967091353544 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-u33lirks4xscf4pjcjsiknt4ki\" mod_revision:1286 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-u33lirks4xscf4pjcjsiknt4ki\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-u33lirks4xscf4pjcjsiknt4ki\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-21T18:07:34.926248Z","caller":"traceutil/trace.go:171","msg":"trace[825117347] linearizableReadLoop","detail":"{readStateIndex:1428; appliedIndex:1426; }","duration":"211.141035ms","start":"2023-12-21T18:07:34.715095Z","end":"2023-12-21T18:07:34.926236Z","steps":["trace[825117347] 'read index received'  (duration: 1.869631ms)","trace[825117347] 'applied index is now lower than readState.Index'  (duration: 209.270311ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-21T18:07:34.926267Z","caller":"traceutil/trace.go:171","msg":"trace[1637184594] transaction","detail":"{read_only:false; response_revision:1385; number_of_response:1; }","duration":"253.467872ms","start":"2023-12-21T18:07:34.672784Z","end":"2023-12-21T18:07:34.926252Z","steps":["trace[1637184594] 'process raft request'  (duration: 95.291312ms)","trace[1637184594] 'compare'  (duration: 157.834045ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-21T18:07:34.926304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.224891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-21T18:07:34.926327Z","caller":"traceutil/trace.go:171","msg":"trace[318409829] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1385; }","duration":"211.248671ms","start":"2023-12-21T18:07:34.71507Z","end":"2023-12-21T18:07:34.926319Z","steps":["trace[318409829] 'agreement among raft nodes before linearized reading'  (duration: 211.193881ms)"],"step_count":1}
	
	
	==> gcp-auth [44b0ab549397212774ea02f6e5db8363f2af515159804b2ec55bb21cec4b9f1a] <==
	2023/12/21 18:07:19 GCP Auth Webhook started!
	2023/12/21 18:07:25 Ready to marshal response ...
	2023/12/21 18:07:25 Ready to write response ...
	2023/12/21 18:07:31 Ready to marshal response ...
	2023/12/21 18:07:31 Ready to write response ...
	2023/12/21 18:07:37 Ready to marshal response ...
	2023/12/21 18:07:37 Ready to write response ...
	2023/12/21 18:07:37 Ready to marshal response ...
	2023/12/21 18:07:37 Ready to write response ...
	2023/12/21 18:07:43 Ready to marshal response ...
	2023/12/21 18:07:43 Ready to write response ...
	2023/12/21 18:07:43 Ready to marshal response ...
	2023/12/21 18:07:43 Ready to write response ...
	2023/12/21 18:07:43 Ready to marshal response ...
	2023/12/21 18:07:43 Ready to write response ...
	2023/12/21 18:07:43 Ready to marshal response ...
	2023/12/21 18:07:43 Ready to write response ...
	2023/12/21 18:07:53 Ready to marshal response ...
	2023/12/21 18:07:53 Ready to write response ...
	2023/12/21 18:07:58 Ready to marshal response ...
	2023/12/21 18:07:58 Ready to write response ...
	2023/12/21 18:08:19 Ready to marshal response ...
	2023/12/21 18:08:19 Ready to write response ...
	2023/12/21 18:10:04 Ready to marshal response ...
	2023/12/21 18:10:04 Ready to write response ...
	
	
	==> kernel <==
	 18:10:14 up 52 min,  0 users,  load average: 0.36, 0.50, 0.27
	Linux addons-443778 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [070cc0f84091ebd995501a64027e28ccbcf92560ab8759c80b304d7691fa84ae] <==
	I1221 18:08:11.236644       1 main.go:227] handling current node
	I1221 18:08:21.247898       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:08:21.247919       1 main.go:227] handling current node
	I1221 18:08:31.259433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:08:31.259454       1 main.go:227] handling current node
	I1221 18:08:41.263081       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:08:41.263102       1 main.go:227] handling current node
	I1221 18:08:51.265873       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:08:51.265895       1 main.go:227] handling current node
	I1221 18:09:01.271502       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:09:01.271524       1 main.go:227] handling current node
	I1221 18:09:11.282595       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:09:11.282616       1 main.go:227] handling current node
	I1221 18:09:21.286261       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:09:21.286283       1 main.go:227] handling current node
	I1221 18:09:31.296971       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:09:31.296993       1 main.go:227] handling current node
	I1221 18:09:41.300679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:09:41.300701       1 main.go:227] handling current node
	I1221 18:09:51.312912       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:09:51.312933       1 main.go:227] handling current node
	I1221 18:10:01.316588       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:10:01.316616       1 main.go:227] handling current node
	I1221 18:10:11.329360       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:10:11.329383       1 main.go:227] handling current node
	
	
	==> kube-apiserver [77392f2d0fc202062d25580d1bca55bdea8719a6a53721482df6918d03daf139] <==
	I1221 18:07:43.422283       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.188.200"}
	I1221 18:07:43.563870       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.45.36"}
	I1221 18:07:51.382501       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1221 18:08:08.357524       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1221 18:08:14.518700       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1221 18:08:36.188468       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.188525       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.194496       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.194619       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.200894       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.201010       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.201838       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.201875       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.209851       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.209958       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.215359       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.215406       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.220798       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.220834       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1221 18:08:36.222712       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1221 18:08:36.222800       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1221 18:08:37.202020       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1221 18:08:37.221513       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1221 18:08:37.231544       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1221 18:10:04.551004       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.198.159"}
	
	
	==> kube-controller-manager [8bb787b0999d62f806910fc33330baf46c551acee69f1c0a16a1fa3e587b3d3e] <==
	E1221 18:09:08.460536       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:10.107156       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:10.107185       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:13.605674       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:13.605704       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:13.704628       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:13.704656       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:37.095992       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:37.096020       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:39.337016       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:39.337046       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:39.520077       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:39.520108       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1221 18:09:50.599968       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1221 18:09:50.600009       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1221 18:10:04.396769       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1221 18:10:04.405180       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-dvhpr"
	I1221 18:10:04.409650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.990771ms"
	I1221 18:10:04.419751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.055578ms"
	I1221 18:10:04.419819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.734µs"
	I1221 18:10:06.494495       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1221 18:10:06.495415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.831µs"
	I1221 18:10:06.498312       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1221 18:10:08.013349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.246054ms"
	I1221 18:10:08.013426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.827µs"
	
	
	==> kube-proxy [25307789ff85e9cd92331b3cd9c2da8154332ed93b359cd3a11efca04a42a842] <==
	I1221 18:05:41.688594       1 server_others.go:69] "Using iptables proxy"
	I1221 18:05:41.792843       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1221 18:05:42.096739       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 18:05:42.106662       1 server_others.go:152] "Using iptables Proxier"
	I1221 18:05:42.106746       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1221 18:05:42.106778       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1221 18:05:42.106821       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1221 18:05:42.107082       1 server.go:846] "Version info" version="v1.28.4"
	I1221 18:05:42.107299       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 18:05:42.108173       1 config.go:188] "Starting service config controller"
	I1221 18:05:42.109176       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1221 18:05:42.108546       1 config.go:97] "Starting endpoint slice config controller"
	I1221 18:05:42.109254       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1221 18:05:42.108976       1 config.go:315] "Starting node config controller"
	I1221 18:05:42.109267       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1221 18:05:42.285421       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1221 18:05:42.288158       1 shared_informer.go:318] Caches are synced for node config
	I1221 18:05:42.385426       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [c40ad4bb410e506fa822a49e2194b5b8d2e6ee1398933a7b66da5a2d40b2959a] <==
	W1221 18:05:20.998660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1221 18:05:20.998684       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1221 18:05:20.999222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1221 18:05:20.999246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1221 18:05:20.999371       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 18:05:20.999393       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1221 18:05:20.999403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1221 18:05:20.999414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1221 18:05:20.999422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1221 18:05:20.999431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1221 18:05:21.824281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1221 18:05:21.824309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1221 18:05:21.895742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:05:21.895778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1221 18:05:21.936024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1221 18:05:21.936055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1221 18:05:21.969302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:05:21.969327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1221 18:05:21.995989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1221 18:05:21.996017       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1221 18:05:22.026243       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 18:05:22.026267       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1221 18:05:22.110873       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 18:05:22.110900       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1221 18:05:23.790765       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 21 18:10:04 addons-443778 kubelet[1554]: I1221 18:10:04.528703    1554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6c7d41fe-e44f-4e46-87d4-741abcd996d5-gcp-creds\") pod \"hello-world-app-5d77478584-dvhpr\" (UID: \"6c7d41fe-e44f-4e46-87d4-741abcd996d5\") " pod="default/hello-world-app-5d77478584-dvhpr"
	Dec 21 18:10:04 addons-443778 kubelet[1554]: I1221 18:10:04.528777    1554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzslv\" (UniqueName: \"kubernetes.io/projected/6c7d41fe-e44f-4e46-87d4-741abcd996d5-kube-api-access-wzslv\") pod \"hello-world-app-5d77478584-dvhpr\" (UID: \"6c7d41fe-e44f-4e46-87d4-741abcd996d5\") " pod="default/hello-world-app-5d77478584-dvhpr"
	Dec 21 18:10:04 addons-443778 kubelet[1554]: W1221 18:10:04.817948    1554 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/08fa7e5d4c9c5ad10799e09231a811783c6a6c73102208b6e3b5ac4f4c31e906/crio-30cb63148982b8a60a981e8f69b8f99547afa6e60d08802efd2a1684cd6d87b0 WatchSource:0}: Error finding container 30cb63148982b8a60a981e8f69b8f99547afa6e60d08802efd2a1684cd6d87b0: Status 404 returned error can't find the container with id 30cb63148982b8a60a981e8f69b8f99547afa6e60d08802efd2a1684cd6d87b0
	Dec 21 18:10:05 addons-443778 kubelet[1554]: I1221 18:10:05.535871    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghwbh\" (UniqueName: \"kubernetes.io/projected/0eece98b-7a0d-4755-86cb-1a7f1dcce438-kube-api-access-ghwbh\") pod \"0eece98b-7a0d-4755-86cb-1a7f1dcce438\" (UID: \"0eece98b-7a0d-4755-86cb-1a7f1dcce438\") "
	Dec 21 18:10:05 addons-443778 kubelet[1554]: I1221 18:10:05.537579    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eece98b-7a0d-4755-86cb-1a7f1dcce438-kube-api-access-ghwbh" (OuterVolumeSpecName: "kube-api-access-ghwbh") pod "0eece98b-7a0d-4755-86cb-1a7f1dcce438" (UID: "0eece98b-7a0d-4755-86cb-1a7f1dcce438"). InnerVolumeSpecName "kube-api-access-ghwbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 21 18:10:05 addons-443778 kubelet[1554]: I1221 18:10:05.636905    1554 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ghwbh\" (UniqueName: \"kubernetes.io/projected/0eece98b-7a0d-4755-86cb-1a7f1dcce438-kube-api-access-ghwbh\") on node \"addons-443778\" DevicePath \"\""
	Dec 21 18:10:05 addons-443778 kubelet[1554]: I1221 18:10:05.994431    1554 scope.go:117] "RemoveContainer" containerID="5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f"
	Dec 21 18:10:06 addons-443778 kubelet[1554]: I1221 18:10:06.008805    1554 scope.go:117] "RemoveContainer" containerID="5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f"
	Dec 21 18:10:06 addons-443778 kubelet[1554]: E1221 18:10:06.009223    1554 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f\": container with ID starting with 5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f not found: ID does not exist" containerID="5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f"
	Dec 21 18:10:06 addons-443778 kubelet[1554]: I1221 18:10:06.009294    1554 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f"} err="failed to get container status \"5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f\": rpc error: code = NotFound desc = could not find container \"5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f\": container with ID starting with 5631963d0545faed45cd819bfe1e5fe8a2ee482a858858e8924249c65de7d42f not found: ID does not exist"
	Dec 21 18:10:07 addons-443778 kubelet[1554]: I1221 18:10:07.799157    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0eece98b-7a0d-4755-86cb-1a7f1dcce438" path="/var/lib/kubelet/pods/0eece98b-7a0d-4755-86cb-1a7f1dcce438/volumes"
	Dec 21 18:10:07 addons-443778 kubelet[1554]: I1221 18:10:07.799482    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4b0f628c-dcca-4b65-b0e9-b0928abfd0b8" path="/var/lib/kubelet/pods/4b0f628c-dcca-4b65-b0e9-b0928abfd0b8/volumes"
	Dec 21 18:10:07 addons-443778 kubelet[1554]: I1221 18:10:07.799774    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a766d217-7fda-446f-90bd-faa8b462da8f" path="/var/lib/kubelet/pods/a766d217-7fda-446f-90bd-faa8b462da8f/volumes"
	Dec 21 18:10:08 addons-443778 kubelet[1554]: I1221 18:10:08.008169    1554 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-dvhpr" podStartSLOduration=1.893247114 podCreationTimestamp="2023-12-21 18:10:04 +0000 UTC" firstStartedPulling="2023-12-21 18:10:04.821539723 +0000 UTC m=+281.141595464" lastFinishedPulling="2023-12-21 18:10:06.936424864 +0000 UTC m=+283.256480605" observedRunningTime="2023-12-21 18:10:08.007826582 +0000 UTC m=+284.327882338" watchObservedRunningTime="2023-12-21 18:10:08.008132255 +0000 UTC m=+284.328188010"
	Dec 21 18:10:09 addons-443778 kubelet[1554]: I1221 18:10:09.863722    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/baec8a62-37d3-41ac-a6ab-35af85136983-webhook-cert\") pod \"baec8a62-37d3-41ac-a6ab-35af85136983\" (UID: \"baec8a62-37d3-41ac-a6ab-35af85136983\") "
	Dec 21 18:10:09 addons-443778 kubelet[1554]: I1221 18:10:09.863829    1554 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndmcr\" (UniqueName: \"kubernetes.io/projected/baec8a62-37d3-41ac-a6ab-35af85136983-kube-api-access-ndmcr\") pod \"baec8a62-37d3-41ac-a6ab-35af85136983\" (UID: \"baec8a62-37d3-41ac-a6ab-35af85136983\") "
	Dec 21 18:10:09 addons-443778 kubelet[1554]: I1221 18:10:09.865589    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baec8a62-37d3-41ac-a6ab-35af85136983-kube-api-access-ndmcr" (OuterVolumeSpecName: "kube-api-access-ndmcr") pod "baec8a62-37d3-41ac-a6ab-35af85136983" (UID: "baec8a62-37d3-41ac-a6ab-35af85136983"). InnerVolumeSpecName "kube-api-access-ndmcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 21 18:10:09 addons-443778 kubelet[1554]: I1221 18:10:09.865748    1554 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baec8a62-37d3-41ac-a6ab-35af85136983-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "baec8a62-37d3-41ac-a6ab-35af85136983" (UID: "baec8a62-37d3-41ac-a6ab-35af85136983"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:10:09 addons-443778 kubelet[1554]: I1221 18:10:09.965134    1554 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/baec8a62-37d3-41ac-a6ab-35af85136983-webhook-cert\") on node \"addons-443778\" DevicePath \"\""
	Dec 21 18:10:09 addons-443778 kubelet[1554]: I1221 18:10:09.965190    1554 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ndmcr\" (UniqueName: \"kubernetes.io/projected/baec8a62-37d3-41ac-a6ab-35af85136983-kube-api-access-ndmcr\") on node \"addons-443778\" DevicePath \"\""
	Dec 21 18:10:10 addons-443778 kubelet[1554]: I1221 18:10:10.005073    1554 scope.go:117] "RemoveContainer" containerID="fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747"
	Dec 21 18:10:10 addons-443778 kubelet[1554]: I1221 18:10:10.018758    1554 scope.go:117] "RemoveContainer" containerID="fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747"
	Dec 21 18:10:10 addons-443778 kubelet[1554]: E1221 18:10:10.019074    1554 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747\": container with ID starting with fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747 not found: ID does not exist" containerID="fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747"
	Dec 21 18:10:10 addons-443778 kubelet[1554]: I1221 18:10:10.019120    1554 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747"} err="failed to get container status \"fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747\": rpc error: code = NotFound desc = could not find container \"fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747\": container with ID starting with fe3517512c798004e4673740e56ef8a16c2744f5369c4287b5aa8a5152915747 not found: ID does not exist"
	Dec 21 18:10:11 addons-443778 kubelet[1554]: I1221 18:10:11.799899    1554 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="baec8a62-37d3-41ac-a6ab-35af85136983" path="/var/lib/kubelet/pods/baec8a62-37d3-41ac-a6ab-35af85136983/volumes"
	
	
	==> storage-provisioner [2cdeb7ccae3f3c0b59880eb8e867453e325e3d49abe3c4ccb6f06151e57aebcb] <==
	I1221 18:06:12.287086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 18:06:12.295703       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 18:06:12.295756       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 18:06:12.301756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 18:06:12.301812       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c05167d1-a432-484e-abd0-b91be21e63e3", APIVersion:"v1", ResourceVersion:"937", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-443778_48c1b69d-0591-42fc-989f-aab25afd255d became leader
	I1221 18:06:12.301918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-443778_48c1b69d-0591-42fc-989f-aab25afd255d!
	I1221 18:06:12.403059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-443778_48c1b69d-0591-42fc-989f-aab25afd255d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-443778 -n addons-443778
helpers_test.go:261: (dbg) Run:  kubectl --context addons-443778 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.57s)
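For local triage, the failing check can be replayed with the same commands the test drives (a minimal sketch, assuming the addons-443778 profile is still running and a stock minikube binary stands in for out/minikube-linux-amd64):

    kubectl --context addons-443778 wait --for=condition=ready \
      --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    # this is the step that timed out above; 28 is curl's operation-timeout exit code
    minikube -p addons-443778 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"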

TestFunctional/parallel/ImageCommands/ImageBuild (6.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh pgrep buildkitd: exit status 1 (252.023635ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image build -t localhost/my-image:functional-209430 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 image build -t localhost/my-image:functional-209430 testdata/build --alsologtostderr: (4.343026019s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-209430 image build -t localhost/my-image:functional-209430 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e9e6520af2c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-209430
--> 769dfb2ef38
Successfully tagged localhost/my-image:functional-209430
769dfb2ef38211180200c9b7d26f04ede7079a47c6134ec0597f098b52f567de
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-209430 image build -t localhost/my-image:functional-209430 testdata/build --alsologtostderr:
I1221 18:14:00.534427   55527 out.go:296] Setting OutFile to fd 1 ...
I1221 18:14:00.534647   55527 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.534676   55527 out.go:309] Setting ErrFile to fd 2...
I1221 18:14:00.534694   55527 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.534998   55527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
I1221 18:14:00.535788   55527 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.536309   55527 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.536854   55527 cli_runner.go:164] Run: docker container inspect functional-209430 --format={{.State.Status}}
I1221 18:14:00.553480   55527 ssh_runner.go:195] Run: systemctl --version
I1221 18:14:00.553539   55527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209430
I1221 18:14:00.570880   55527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/functional-209430/id_rsa Username:docker}
I1221 18:14:00.657328   55527 build_images.go:151] Building image from path: /tmp/build.2203221237.tar
I1221 18:14:00.657414   55527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1221 18:14:00.664900   55527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2203221237.tar
I1221 18:14:00.668137   55527 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2203221237.tar: stat -c "%s %y" /var/lib/minikube/build/build.2203221237.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2203221237.tar': No such file or directory
I1221 18:14:00.668158   55527 ssh_runner.go:362] scp /tmp/build.2203221237.tar --> /var/lib/minikube/build/build.2203221237.tar (3072 bytes)
I1221 18:14:00.689302   55527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2203221237
I1221 18:14:00.697215   55527 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2203221237 -xf /var/lib/minikube/build/build.2203221237.tar
I1221 18:14:00.705265   55527 crio.go:297] Building image: /var/lib/minikube/build/build.2203221237
I1221 18:14:00.705322   55527 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-209430 /var/lib/minikube/build/build.2203221237 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1221 18:14:04.793881   55527 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-209430 /var/lib/minikube/build/build.2203221237 --cgroup-manager=cgroupfs: (4.088533821s)
I1221 18:14:04.793943   55527 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2203221237
I1221 18:14:04.802767   55527 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2203221237.tar
I1221 18:14:04.810857   55527 build_images.go:207] Built localhost/my-image:functional-209430 from /tmp/build.2203221237.tar
I1221 18:14:04.810885   55527 build_images.go:123] succeeded building to: functional-209430
I1221 18:14:04.810890   55527 build_images.go:124] failed building to: 
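The three STEP lines in the stdout above imply a build context of roughly this shape; a hypothetical reconstruction (the actual testdata/build contents are not shown in this report, and content.txt here is a placeholder):

    # reconstructed from the STEP output above; not the literal testdata/build
    mkdir -p build && cd build
    printf 'placeholder\n' > content.txt   # real file contents unknown
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # same builder invocation the crio runtime path uses (see the ssh_runner line above)
    sudo podman build -t localhost/my-image:functional-209430 . --cgroup-manager=cgroupfs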
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 image ls: (2.381870049s)
functional_test.go:442: expected "localhost/my-image:functional-209430" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.98s)
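The build itself succeeded (the image committed as 769dfb2ef38...), so the failure is the follow-up assertion that the tag shows up in image ls. A manual recheck, under the same assumptions as the sketch above:

    minikube -p functional-209430 image ls --format table | grep my-image
    minikube -p functional-209430 ssh -- sudo crictl images | grep my-image

If the second command lists the image but the first does not, the problem is on the listing side rather than the build side.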

TestIngressAddonLegacy/serial/ValidateIngressAddons (182.94s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-341255 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-341255 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.640281213s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-341255 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-341255 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [068b3de9-f7bd-4e14-9c70-b8df991fe200] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [068b3de9-f7bd-4e14-9c70-b8df991fe200] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.002618854s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1221 18:17:19.987534   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:17:47.671441   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:18:25.694499   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:25.699757   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:25.710030   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:25.730291   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:25.770576   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:25.850869   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:26.011252   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:26.331787   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:26.972658   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:28.253210   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:30.813356   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:18:35.933663   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-341255 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.41080269s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-341255 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1221 18:18:46.173930   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.008062129s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
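The timeout means no DNS answer came back from the node IP at all. A quick manual probe of the ingress-dns path (a sketch, assuming the profile is still up; the grep pattern is a guess at the addon's pod name):

    minikube -p ingress-addon-legacy-341255 ip
    nslookup hello-john.test 192.168.49.2
    kubectl --context ingress-addon-legacy-341255 -n kube-system get pods | grep ingress-dns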
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-341255 addons disable ingress --alsologtostderr -v=1: (7.36595337s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-341255
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-341255:

-- stdout --
	[
	    {
	        "Id": "1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59",
	        "Created": "2023-12-21T18:14:48.112925261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 57288,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-21T18:14:48.402457897Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aaeab328720c5f9c5998a41dcf23df3cc1d95a0c58c535e504f0d445f5dfad94",
	        "ResolvConfPath": "/var/lib/docker/containers/1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59/hosts",
	        "LogPath": "/var/lib/docker/containers/1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59/1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59-json.log",
	        "Name": "/ingress-addon-legacy-341255",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-341255:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-341255",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/343e0ee09284ac67ded71787184ca54d42d2717e16a4a23a2d77a968f12480c5-init/diff:/var/lib/docker/overlay2/5f93c210e62b94f4976b2a81580f0bf0da95be40a907596ee84a499ee959f455/diff",
	                "MergedDir": "/var/lib/docker/overlay2/343e0ee09284ac67ded71787184ca54d42d2717e16a4a23a2d77a968f12480c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/343e0ee09284ac67ded71787184ca54d42d2717e16a4a23a2d77a968f12480c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/343e0ee09284ac67ded71787184ca54d42d2717e16a4a23a2d77a968f12480c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-341255",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-341255/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-341255",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-341255",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-341255",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1422817d07c82d802aeb4002df0ecb53eee3dfb3cd42e0a9dc14ae8b8618df4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f1422817d07c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-341255": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1f866a6ce2e5",
	                        "ingress-addon-legacy-341255"
	                    ],
	                    "NetworkID": "2722de9e68f08762762626fbf47a82fd5f62c9f90de5091e4c74f74fac3456c3",
	                    "EndpointID": "752eb571ca44a3397ad4f202be53631a3ae674d5d4093e1d25c2c2ed42d021bd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-341255 -n ingress-addon-legacy-341255
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-341255 logs -n 25: (1.016135449s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-209430 image save                                                 | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-209430                     |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-209430 image rm                                                   | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-209430                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-209430 image ls                                                   | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| image          | functional-209430 image load                                                 | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-209430 image ls                                                   | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	| image          | functional-209430 image save --daemon                                        | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-209430                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| service        | functional-209430 service                                                    | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | hello-node-connect --url                                                     |                             |         |         |                     |                     |
	| update-context | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:13 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:13 UTC | 21 Dec 23 18:14 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-209430 ssh pgrep                                                  | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-209430 image build -t                                             | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	|                | localhost/my-image:functional-209430                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-209430                                                            | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-209430 image ls                                                   | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	| delete         | -p functional-209430                                                         | functional-209430           | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:14 UTC |
	| start          | -p ingress-addon-legacy-341255                                               | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:14 UTC | 21 Dec 23 18:15 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-341255                                                  | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:15 UTC | 21 Dec 23 18:16 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-341255                                                  | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:16 UTC | 21 Dec 23 18:16 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-341255                                                  | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:16 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-341255 ip                                               | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:18 UTC | 21 Dec 23 18:18 UTC |
	| addons         | ingress-addon-legacy-341255                                                  | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:18 UTC | 21 Dec 23 18:18 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-341255                                                  | ingress-addon-legacy-341255 | jenkins | v1.32.0 | 21 Dec 23 18:18 UTC | 21 Dec 23 18:19 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:14:22
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:14:22.183339   56640 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:14:22.183591   56640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:14:22.183599   56640 out.go:309] Setting ErrFile to fd 2...
	I1221 18:14:22.183604   56640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:14:22.183773   56640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:14:22.184360   56640 out.go:303] Setting JSON to false
	I1221 18:14:22.185272   56640 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3409,"bootTime":1703179053,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:14:22.185334   56640 start.go:138] virtualization: kvm guest
	I1221 18:14:22.187310   56640 out.go:177] * [ingress-addon-legacy-341255] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:14:22.189170   56640 notify.go:220] Checking for updates...
	I1221 18:14:22.190500   56640 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:14:22.191817   56640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:14:22.193092   56640 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:14:22.194495   56640 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:14:22.195738   56640 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:14:22.196942   56640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:14:22.198409   56640 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:14:22.220112   56640 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:14:22.220218   56640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:14:22.270200   56640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-21 18:14:22.262149816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:14:22.270293   56640 docker.go:295] overlay module found
	I1221 18:14:22.272015   56640 out.go:177] * Using the docker driver based on user configuration
	I1221 18:14:22.273315   56640 start.go:298] selected driver: docker
	I1221 18:14:22.273333   56640 start.go:902] validating driver "docker" against <nil>
	I1221 18:14:22.273344   56640 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:14:22.274059   56640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:14:22.326164   56640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-21 18:14:22.318344883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:14:22.326325   56640 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:14:22.326538   56640 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 18:14:22.328204   56640 out.go:177] * Using Docker driver with root privileges
	I1221 18:14:22.329566   56640 cni.go:84] Creating CNI manager for ""
	I1221 18:14:22.329591   56640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:14:22.329603   56640 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 18:14:22.329616   56640 start_flags.go:323] config:
	{Name:ingress-addon-legacy-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-341255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:14:22.331052   56640 out.go:177] * Starting control plane node ingress-addon-legacy-341255 in cluster ingress-addon-legacy-341255
	I1221 18:14:22.332413   56640 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:14:22.333691   56640 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:14:22.334967   56640 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1221 18:14:22.334991   56640 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:14:22.351221   56640 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:14:22.351243   56640 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1221 18:14:22.453531   56640 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1221 18:14:22.453569   56640 cache.go:56] Caching tarball of preloaded images
	I1221 18:14:22.453731   56640 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1221 18:14:22.455536   56640 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1221 18:14:22.456871   56640 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:14:22.577180   56640 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1221 18:14:39.847785   56640 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:14:39.847883   56640 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:14:40.987280   56640 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
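
The preload above is fetched with its expected md5 carried in the ?checksum= query parameter and then verified locally (preload.go:238-256). A self-contained Go sketch of that verification step follows; verifyMD5 is an illustrative name rather than minikube's API, and the path/hash values are taken from the download URL in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams the downloaded tarball through an md5 hash and
// compares the hex digest against the advertised checksum.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// values taken from the ?checksum=md5:... download URL in the log
	fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4",
		"0d02e096853189c5b37812b400898e14"))
}
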
	I1221 18:14:40.987616   56640 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/config.json ...
	I1221 18:14:40.987642   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/config.json: {Name:mkb0117d5bbfa1e6bffeea9444031661d0041e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:40.987789   56640 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:14:40.987808   56640 start.go:365] acquiring machines lock for ingress-addon-legacy-341255: {Name:mk4f84c194536d18df1b9ad7cecf4990a15e2837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:14:40.987850   56640 start.go:369] acquired machines lock for "ingress-addon-legacy-341255" in 30.984µs
	I1221 18:14:40.987868   56640 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-341255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 18:14:40.987931   56640 start.go:125] createHost starting for "" (driver="docker")
	I1221 18:14:40.990436   56640 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1221 18:14:40.990648   56640 start.go:159] libmachine.API.Create for "ingress-addon-legacy-341255" (driver="docker")
	I1221 18:14:40.990681   56640 client.go:168] LocalClient.Create starting
	I1221 18:14:40.990749   56640 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem
	I1221 18:14:40.990776   56640 main.go:141] libmachine: Decoding PEM data...
	I1221 18:14:40.990791   56640 main.go:141] libmachine: Parsing certificate...
	I1221 18:14:40.990839   56640 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem
	I1221 18:14:40.990859   56640 main.go:141] libmachine: Decoding PEM data...
	I1221 18:14:40.990870   56640 main.go:141] libmachine: Parsing certificate...
	I1221 18:14:40.991146   56640 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 18:14:41.006125   56640 cli_runner.go:211] docker network inspect ingress-addon-legacy-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 18:14:41.006183   56640 network_create.go:281] running [docker network inspect ingress-addon-legacy-341255] to gather additional debugging logs...
	I1221 18:14:41.006200   56640 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-341255
	W1221 18:14:41.020058   56640 cli_runner.go:211] docker network inspect ingress-addon-legacy-341255 returned with exit code 1
	I1221 18:14:41.020086   56640 network_create.go:284] error running [docker network inspect ingress-addon-legacy-341255]: docker network inspect ingress-addon-legacy-341255: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-341255 not found
	I1221 18:14:41.020099   56640 network_create.go:286] output of [docker network inspect ingress-addon-legacy-341255]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-341255 not found
	
	** /stderr **
	I1221 18:14:41.020184   56640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:14:41.034649   56640 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00269f040}
	I1221 18:14:41.034681   56640 network_create.go:124] attempt to create docker network ingress-addon-legacy-341255 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1221 18:14:41.034722   56640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-341255 ingress-addon-legacy-341255
	I1221 18:14:41.084003   56640 network_create.go:108] docker network ingress-addon-legacy-341255 192.168.49.0/24 created
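
network.go:209 above shows minikube probing for a free private subnet and settling on 192.168.49.0/24 before issuing the docker network create. A rough Go sketch of that probe loop follows; isFree is a stand-in (the real check inspects host routes and existing docker networks), and the later candidate subnets are assumptions, not taken from the log:

package main

import "fmt"

// isFree is a hypothetical stand-in: the real check walks host
// interfaces, routes, and existing docker networks. Here every
// candidate is reported free so the sketch runs standalone.
func isFree(cidr string) bool { return true }

// firstFreeSubnet returns the first unused candidate /24. The first
// entry matches the subnet chosen in the log; the rest are assumed.
func firstFreeSubnet(candidates []string) (string, error) {
	for _, c := range candidates {
		if isFree(c) {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free private subnet found")
}

func main() {
	subnet, _ := firstFreeSubnet([]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
	fmt.Println(subnet) // 192.168.49.0/24, as in the log
}
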
	I1221 18:14:41.084040   56640 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-341255" container
	I1221 18:14:41.084105   56640 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:14:41.098180   56640 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-341255 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-341255 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:14:41.113763   56640 oci.go:103] Successfully created a docker volume ingress-addon-legacy-341255
	I1221 18:14:41.113836   56640 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-341255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-341255 --entrypoint /usr/bin/test -v ingress-addon-legacy-341255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:14:42.834504   56640 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-341255-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-341255 --entrypoint /usr/bin/test -v ingress-addon-legacy-341255:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.720631149s)
	I1221 18:14:42.834537   56640 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-341255
	I1221 18:14:42.834552   56640 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1221 18:14:42.834571   56640 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:14:42.834626   56640 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-341255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:14:48.049843   56640 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-341255:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.215158639s)
	I1221 18:14:48.049878   56640 kic.go:203] duration metric: took 5.215306 seconds to extract preloaded images to volume
	W1221 18:14:48.050016   56640 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:14:48.050095   56640 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:14:48.099277   56640 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-341255 --name ingress-addon-legacy-341255 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-341255 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-341255 --network ingress-addon-legacy-341255 --ip 192.168.49.2 --volume ingress-addon-legacy-341255:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:14:48.410558   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Running}}
	I1221 18:14:48.426949   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Status}}
	I1221 18:14:48.444346   56640 cli_runner.go:164] Run: docker exec ingress-addon-legacy-341255 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:14:48.510311   56640 oci.go:144] the created container "ingress-addon-legacy-341255" has a running status.
	I1221 18:14:48.510346   56640 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa...
	I1221 18:14:48.739284   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1221 18:14:48.739338   56640 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:14:48.759837   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Status}}
	I1221 18:14:48.777671   56640 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:14:48.777691   56640 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-341255 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 18:14:48.843393   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Status}}
	I1221 18:14:48.863070   56640 machine.go:88] provisioning docker machine ...
	I1221 18:14:48.863108   56640 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-341255"
	I1221 18:14:48.863170   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:48.878416   56640 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:48.878762   56640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1221 18:14:48.878778   56640 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-341255 && echo "ingress-addon-legacy-341255" | sudo tee /etc/hostname
	I1221 18:14:49.058829   56640 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-341255
	
	I1221 18:14:49.058908   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:49.076182   56640 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:49.076502   56640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1221 18:14:49.076522   56640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-341255' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-341255/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-341255' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:14:49.188956   56640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1221 18:14:49.188984   56640 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-9881/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-9881/.minikube}
	I1221 18:14:49.189001   56640 ubuntu.go:177] setting up certificates
	I1221 18:14:49.189011   56640 provision.go:83] configureAuth start
	I1221 18:14:49.189061   56640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-341255
	I1221 18:14:49.205041   56640 provision.go:138] copyHostCerts
	I1221 18:14:49.205078   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:14:49.205115   56640 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem, removing ...
	I1221 18:14:49.205124   56640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:14:49.205182   56640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem (1078 bytes)
	I1221 18:14:49.205286   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:14:49.205310   56640 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem, removing ...
	I1221 18:14:49.205317   56640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:14:49.205345   56640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem (1123 bytes)
	I1221 18:14:49.205393   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:14:49.205411   56640 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem, removing ...
	I1221 18:14:49.205417   56640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:14:49.205437   56640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem (1679 bytes)
	I1221 18:14:49.205490   56640 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-341255 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-341255]
	I1221 18:14:49.346369   56640 provision.go:172] copyRemoteCerts
	I1221 18:14:49.346422   56640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:14:49.346460   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:49.361916   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:14:49.445260   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1221 18:14:49.445326   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 18:14:49.465640   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1221 18:14:49.465694   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 18:14:49.485503   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1221 18:14:49.485555   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1221 18:14:49.504971   56640 provision.go:86] duration metric: configureAuth took 315.949883ms
	I1221 18:14:49.504996   56640 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:14:49.505168   56640 config.go:182] Loaded profile config "ingress-addon-legacy-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1221 18:14:49.505303   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:49.521134   56640 main.go:141] libmachine: Using SSH client type: native
	I1221 18:14:49.521468   56640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1221 18:14:49.521487   56640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 18:14:49.734517   56640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 18:14:49.734549   56640 machine.go:91] provisioned docker machine in 871.454576ms
	I1221 18:14:49.734560   56640 client.go:171] LocalClient.Create took 8.743872049s
	I1221 18:14:49.734587   56640 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-341255" took 8.743938903s
	I1221 18:14:49.734601   56640 start.go:300] post-start starting for "ingress-addon-legacy-341255" (driver="docker")
	I1221 18:14:49.734614   56640 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:14:49.734667   56640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:14:49.734714   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:49.750871   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:14:49.833296   56640 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:14:49.836219   56640 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:14:49.836244   56640 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:14:49.836253   56640 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:14:49.836263   56640 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:14:49.836273   56640 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/addons for local assets ...
	I1221 18:14:49.836323   56640 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/files for local assets ...
	I1221 18:14:49.836404   56640 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> 166642.pem in /etc/ssl/certs
	I1221 18:14:49.836414   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> /etc/ssl/certs/166642.pem
	I1221 18:14:49.836516   56640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:14:49.843779   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:14:49.864809   56640 start.go:303] post-start completed in 130.191561ms
	I1221 18:14:49.865157   56640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-341255
	I1221 18:14:49.881369   56640 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/config.json ...
	I1221 18:14:49.881619   56640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:14:49.881673   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:49.896774   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:14:49.977612   56640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:14:49.981430   56640 start.go:128] duration metric: createHost completed in 8.993486152s
	I1221 18:14:49.981465   56640 start.go:83] releasing machines lock for "ingress-addon-legacy-341255", held for 8.993603017s
	I1221 18:14:49.981530   56640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-341255
	I1221 18:14:49.996780   56640 ssh_runner.go:195] Run: cat /version.json
	I1221 18:14:49.996838   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:49.996849   56640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:14:49.996910   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:14:50.012850   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:14:50.013948   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:14:50.180203   56640 ssh_runner.go:195] Run: systemctl --version
	I1221 18:14:50.184012   56640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 18:14:50.318158   56640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:14:50.322028   56640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:14:50.338393   56640 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:14:50.338468   56640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:14:50.362746   56640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1221 18:14:50.362771   56640 start.go:475] detecting cgroup driver to use...
	I1221 18:14:50.362804   56640 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:14:50.362854   56640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 18:14:50.375397   56640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 18:14:50.384598   56640 docker.go:203] disabling cri-docker service (if available) ...
	I1221 18:14:50.384642   56640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 18:14:50.395806   56640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 18:14:50.407433   56640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 18:14:50.482024   56640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 18:14:50.557824   56640 docker.go:219] disabling docker service ...
	I1221 18:14:50.557878   56640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 18:14:50.574036   56640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 18:14:50.583572   56640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 18:14:50.661118   56640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 18:14:50.737041   56640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 18:14:50.747045   56640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:14:50.760498   56640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1221 18:14:50.760571   56640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:14:50.768630   56640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1221 18:14:50.768687   56640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:14:50.776629   56640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:14:50.784507   56640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
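
Net effect of the three sed edits above on /etc/crio/crio.conf.d/02-crio.conf: the pause_image and cgroup_manager lines are rewritten and a conmon_cgroup line is inserted after cgroup_manager. Reconstructed from the commands (the section headers are assumed; the log never shows the file itself):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
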
	I1221 18:14:50.792595   56640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 18:14:50.800019   56640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:14:50.807034   56640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:14:50.813925   56640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:14:50.892257   56640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 18:14:50.988230   56640 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 18:14:50.988289   56640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 18:14:50.991369   56640 start.go:543] Will wait 60s for crictl version
	I1221 18:14:50.991420   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:50.994295   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 18:14:51.024208   56640 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1221 18:14:51.024274   56640 ssh_runner.go:195] Run: crio --version
	I1221 18:14:51.056044   56640 ssh_runner.go:195] Run: crio --version
	I1221 18:14:51.088238   56640 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1221 18:14:51.089602   56640 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-341255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:14:51.105351   56640 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1221 18:14:51.108648   56640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:14:51.118256   56640 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1221 18:14:51.118316   56640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 18:14:51.159269   56640 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1221 18:14:51.159321   56640 ssh_runner.go:195] Run: which lz4
	I1221 18:14:51.162336   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1221 18:14:51.162402   56640 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1221 18:14:51.165084   56640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1221 18:14:51.165107   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1221 18:14:52.047704   56640 crio.go:444] Took 0.885320 seconds to copy over tarball
	I1221 18:14:52.047767   56640 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1221 18:14:54.224421   56640 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176632711s)
	I1221 18:14:54.224450   56640 crio.go:451] Took 2.176720 seconds to extract the tarball
	I1221 18:14:54.224474   56640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1221 18:14:54.290459   56640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 18:14:54.319825   56640 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1221 18:14:54.319847   56640 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1221 18:14:54.319906   56640 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:54.319922   56640 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:54.319947   56640 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:54.319971   56640 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:54.319994   56640 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:54.320118   56640 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1221 18:14:54.320174   56640 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:54.320179   56640 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1221 18:14:54.321173   56640 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:54.321241   56640 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1221 18:14:54.321278   56640 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1221 18:14:54.321176   56640 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:54.321174   56640 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:54.321341   56640 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:54.321173   56640 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:54.321174   56640 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:54.467081   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:54.474647   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:54.474914   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:54.479655   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:54.480012   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:54.507351   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1221 18:14:54.511359   56640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1221 18:14:54.511458   56640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:54.511494   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.518136   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1221 18:14:54.587885   56640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1221 18:14:54.587971   56640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1221 18:14:54.588014   56640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:54.588066   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.587978   56640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:54.588144   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.592919   56640 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1221 18:14:54.592953   56640 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:54.593001   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.594628   56640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1221 18:14:54.594662   56640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:54.594712   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.615304   56640 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1221 18:14:54.615348   56640 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1221 18:14:54.615370   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1221 18:14:54.615383   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.615392   56640 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1221 18:14:54.615421   56640 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1221 18:14:54.615450   56640 ssh_runner.go:195] Run: which crictl
	I1221 18:14:54.615458   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1221 18:14:54.615490   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1221 18:14:54.615534   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1221 18:14:54.615546   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1221 18:14:54.702123   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1221 18:14:54.712804   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1221 18:14:54.712804   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1221 18:14:54.712892   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1221 18:14:54.712901   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1221 18:14:54.712965   56640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1221 18:14:54.713017   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1221 18:14:54.799342   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1221 18:14:54.799378   56640 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1221 18:14:55.243663   56640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:14:55.376981   56640 cache_images.go:92] LoadImages completed in 1.057119031s
	W1221 18:14:55.377104   56640 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
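
Note: the sequence above is minikube's cached-image pattern: probe the runtime for each required image ("podman image inspect --format {{.Id}}"), mark missing ones as "needs transfer", remove any stale copy with crictl, then try to load from the local cache directory. A minimal sketch of the probe-then-remove step, assuming crictl and podman are on PATH and using only the Go standard library (the helper names are hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent mirrors the "podman image inspect --format {{.Id}}" probe in the log.
func imagePresent(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

// removeImage mirrors the "crictl rmi" cleanup that precedes a cache load.
func removeImage(image string) error {
	return exec.Command("sudo", "crictl", "rmi", image).Run()
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/coredns:1.6.7",
	} {
		if !imagePresent(img) {
			fmt.Printf("%q needs transfer\n", img)
			_ = removeImage(img) // ignore "not found", as the log does
		}
	}
}

The "Unable to load cached images" warning above is non-fatal here: kubeadm's preflight phase later pulls the images from the registry instead.
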
	I1221 18:14:55.377189   56640 ssh_runner.go:195] Run: crio config
	I1221 18:14:55.416772   56640 cni.go:84] Creating CNI manager for ""
	I1221 18:14:55.416793   56640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:14:55.416806   56640 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:14:55.416821   56640 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-341255 NodeName:ingress-addon-legacy-341255 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1221 18:14:55.416947   56640 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-341255"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
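
Note: a quick way to sanity-check a generated KubeletConfiguration like the one above, before handing it to kubeadm, is to unmarshal it into a typed struct so field-name typos surface as zero values. A sketch, assuming the third-party gopkg.in/yaml.v3 package is available (the struct covers only fields shown above):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3" // assumed dependency; any YAML library works
)

// kubeletConfig models just the fields present in the generated block above.
type kubeletConfig struct {
	APIVersion    string            `yaml:"apiVersion"`
	Kind          string            `yaml:"kind"`
	CgroupDriver  string            `yaml:"cgroupDriver"`
	ClusterDomain string            `yaml:"clusterDomain"`
	EvictionHard  map[string]string `yaml:"evictionHard"`
	FailSwapOn    bool              `yaml:"failSwapOn"`
	StaticPodPath string            `yaml:"staticPodPath"`
}

func main() {
	doc := []byte(`
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
evictionHard:
  nodefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: evictionHard=%v\n", cfg.Kind, cfg.EvictionHard)
}

The "0%" thresholds match the "disable disk resource management by default" comment in the generated config: eviction on disk pressure is effectively turned off for the test VM.
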
	
	I1221 18:14:55.417023   56640 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-341255 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-341255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
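
Note: the kubelet systemd drop-in above is rendered from the cluster config (node name, node IP, runtime socket, binary path). A minimal sketch of that rendering with text/template; the template constant and field names here are hypothetical, reduced to a few of the flags visible above:

package main

import (
	"os"
	"text/template"
)

// unitTmpl reproduces the shape of the 10-kubeadm.conf drop-in above.
const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, struct {
		KubeletPath, NodeName, NodeIP string
	}{
		KubeletPath: "/var/lib/minikube/binaries/v1.18.20/kubelet",
		NodeName:    "ingress-addon-legacy-341255",
		NodeIP:      "192.168.49.2",
	})
}

In the log this rendered unit is then copied over ssh ("scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf") rather than written locally.
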
	I1221 18:14:55.417072   56640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1221 18:14:55.424662   56640 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:14:55.424738   56640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 18:14:55.432005   56640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1221 18:14:55.446901   56640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1221 18:14:55.461486   56640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1221 18:14:55.475771   56640 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:14:55.478613   56640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
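
Note: the bash one-liner above makes the control-plane.minikube.internal hosts entry idempotent: filter out any existing line for that host, append the fresh mapping, and copy the temp file back into place. The same filter-and-append can be expressed directly; a sketch using only the standard library (real code would need root and an atomic rename):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHost rewrites hosts-file content so exactly one line maps host -> ip,
// matching the grep -v / echo / cp pipeline in the log.
func ensureHost(content, ip, host string) string {
	var keep []string
	for _, line := range strings.Split(content, "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			keep = append(keep, line)
		}
	}
	return strings.TrimRight(strings.Join(keep, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, _ := os.ReadFile("/etc/hosts") // error handling elided for brevity
	out := ensureHost(string(data), "192.168.49.2", "control-plane.minikube.internal")
	fmt.Print(out) // the real flow writes this back with sudo cp
}
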
	I1221 18:14:55.487051   56640 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255 for IP: 192.168.49.2
	I1221 18:14:55.487073   56640 certs.go:190] acquiring lock for shared ca certs: {Name:mk1a19dbb52a881fd398c5196f3505713dce7712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:55.487193   56640 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key
	I1221 18:14:55.487226   56640 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key
	I1221 18:14:55.487264   56640 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.key
	I1221 18:14:55.487281   56640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt with IP's: []
	I1221 18:14:55.614075   56640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt ...
	I1221 18:14:55.614104   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: {Name:mkfa6e76da4f936a45eefdbd318866ea3da03d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:55.614251   56640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.key ...
	I1221 18:14:55.614266   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.key: {Name:mkaa7bb35edffe5beeb5985a20f324aa1f6f0797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:55.614341   56640 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key.dd3b5fb2
	I1221 18:14:55.614361   56640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1221 18:14:55.718387   56640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt.dd3b5fb2 ...
	I1221 18:14:55.718409   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt.dd3b5fb2: {Name:mk5a5369918472fd99ed08d32f0a629da67f0beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:55.718539   56640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key.dd3b5fb2 ...
	I1221 18:14:55.718552   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key.dd3b5fb2: {Name:mk474a7d7acc86ecf95e15bd0cff6e086cb54f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:55.718618   56640 certs.go:337] copying /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt
	I1221 18:14:55.718689   56640 certs.go:341] copying /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key
	I1221 18:14:55.718738   56640 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.key
	I1221 18:14:55.718752   56640 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.crt with IP's: []
	I1221 18:14:55.817141   56640 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.crt ...
	I1221 18:14:55.817169   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.crt: {Name:mk0fafe70967ab4f569769d1d96d8a5e99a1e35e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:14:55.817321   56640 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.key ...
	I1221 18:14:55.817335   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.key: {Name:mk82fbc3d832fff59047cbbf079f7983978dd0ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
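
Note: each profile certificate above is generated in-process and signed by the shared minikubeCA key, with the IP SANs listed in the log. A self-contained sketch of issuing a CA-signed serving cert with IP SANs using crypto/x509 (here the CA is also generated on the fly, unlike minikube, which reuses the existing ca.key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for the reused minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("issued apiserver-style cert, %d bytes DER", len(leafDER))
}
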
	I1221 18:14:55.817398   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1221 18:14:55.817421   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1221 18:14:55.817432   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1221 18:14:55.817444   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1221 18:14:55.817453   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1221 18:14:55.817468   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1221 18:14:55.817479   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1221 18:14:55.817493   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1221 18:14:55.817541   56640 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem (1338 bytes)
	W1221 18:14:55.817575   56640 certs.go:433] ignoring /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664_empty.pem, impossibly tiny 0 bytes
	I1221 18:14:55.817585   56640 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem (1679 bytes)
	I1221 18:14:55.817609   56640 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem (1078 bytes)
	I1221 18:14:55.817633   56640 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:14:55.817654   56640 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem (1679 bytes)
	I1221 18:14:55.817692   56640 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:14:55.817718   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem -> /usr/share/ca-certificates/16664.pem
	I1221 18:14:55.817730   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> /usr/share/ca-certificates/166642.pem
	I1221 18:14:55.817742   56640 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:55.818340   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1221 18:14:55.838736   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1221 18:14:55.858438   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 18:14:55.878662   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1221 18:14:55.897698   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:14:55.916293   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 18:14:55.935026   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:14:55.953540   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:14:55.972234   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem --> /usr/share/ca-certificates/16664.pem (1338 bytes)
	I1221 18:14:55.990883   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /usr/share/ca-certificates/166642.pem (1708 bytes)
	I1221 18:14:56.009364   56640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:14:56.028059   56640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1221 18:14:56.042472   56640 ssh_runner.go:195] Run: openssl version
	I1221 18:14:56.046910   56640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166642.pem && ln -fs /usr/share/ca-certificates/166642.pem /etc/ssl/certs/166642.pem"
	I1221 18:14:56.054376   56640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166642.pem
	I1221 18:14:56.057177   56640 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 21 18:11 /usr/share/ca-certificates/166642.pem
	I1221 18:14:56.057223   56640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166642.pem
	I1221 18:14:56.062898   56640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166642.pem /etc/ssl/certs/3ec20f2e.0"
	I1221 18:14:56.070264   56640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:14:56.077666   56640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:56.080479   56640 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:56.080531   56640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:14:56.086151   56640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1221 18:14:56.093266   56640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16664.pem && ln -fs /usr/share/ca-certificates/16664.pem /etc/ssl/certs/16664.pem"
	I1221 18:14:56.100354   56640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16664.pem
	I1221 18:14:56.103112   56640 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 21 18:11 /usr/share/ca-certificates/16664.pem
	I1221 18:14:56.103153   56640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16664.pem
	I1221 18:14:56.108573   56640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16664.pem /etc/ssl/certs/51391683.0"
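
Note: the three install sequences above repeat one pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so TLS clients can find the CA. A sketch that shells out to openssl for the hash, as the log does (the subject-hash algorithm is OpenSSL-specific, so reimplementing it is rarely worth it):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// subjectHash runs "openssl x509 -hash -noout -in pem", mirroring the log.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	h, err := subjectHash(pem)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	link := "/etc/ssl/certs/" + h + ".0"
	// e.g. /etc/ssl/certs/b5213941.0, matching the symlink created above
	_ = os.Symlink(pem, link) // needs root; "already exists" ignored for brevity
}
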
	I1221 18:14:56.115893   56640 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:14:56.118516   56640 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:14:56.118564   56640 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-341255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-341255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:14:56.118631   56640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 18:14:56.118668   56640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 18:14:56.148358   56640 cri.go:89] found id: ""
	I1221 18:14:56.148408   56640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 18:14:56.155586   56640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 18:14:56.162692   56640 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1221 18:14:56.162738   56640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 18:14:56.169666   56640 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:14:56.169717   56640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 18:14:56.211187   56640 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1221 18:14:56.211233   56640 kubeadm.go:322] [preflight] Running pre-flight checks
	I1221 18:14:56.245479   56640 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:14:56.245568   56640 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1221 18:14:56.245618   56640 kubeadm.go:322] OS: Linux
	I1221 18:14:56.245694   56640 kubeadm.go:322] CGROUPS_CPU: enabled
	I1221 18:14:56.245758   56640 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1221 18:14:56.245824   56640 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1221 18:14:56.245883   56640 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1221 18:14:56.245964   56640 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1221 18:14:56.246042   56640 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1221 18:14:56.308565   56640 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:14:56.308703   56640 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:14:56.308815   56640 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1221 18:14:56.475681   56640 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:14:56.477420   56640 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:14:56.477486   56640 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1221 18:14:56.546262   56640 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:14:56.548854   56640 out.go:204]   - Generating certificates and keys ...
	I1221 18:14:56.548960   56640 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1221 18:14:56.549085   56640 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1221 18:14:56.866402   56640 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:14:57.073664   56640 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:14:57.240474   56640 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1221 18:14:57.440201   56640 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1221 18:14:57.548887   56640 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1221 18:14:57.549084   56640 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-341255 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:14:57.671653   56640 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1221 18:14:57.671855   56640 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-341255 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1221 18:14:57.714397   56640 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:14:57.971748   56640 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:14:58.045016   56640 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1221 18:14:58.045097   56640 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:14:58.164753   56640 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:14:58.248346   56640 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:14:58.397970   56640 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:14:58.525324   56640 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:14:58.525959   56640 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:14:58.527927   56640 out.go:204]   - Booting up control plane ...
	I1221 18:14:58.528033   56640 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:14:58.532180   56640 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:14:58.533157   56640 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:14:58.533849   56640 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:14:58.535736   56640 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:15:05.037823   56640 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502068 seconds
	I1221 18:15:05.038007   56640 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:15:05.048329   56640 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:15:05.565102   56640 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:15:05.565329   56640 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-341255 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1221 18:15:06.072357   56640 kubeadm.go:322] [bootstrap-token] Using token: dqfhbq.kr8jbzs75a13i65z
	I1221 18:15:06.073750   56640 out.go:204]   - Configuring RBAC rules ...
	I1221 18:15:06.073883   56640 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:15:06.077011   56640 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:15:06.082526   56640 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:15:06.084321   56640 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:15:06.085982   56640 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:15:06.087516   56640 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:15:06.093875   56640 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:15:06.388082   56640 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1221 18:15:06.485355   56640 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1221 18:15:06.486429   56640 kubeadm.go:322] 
	I1221 18:15:06.486536   56640 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1221 18:15:06.486547   56640 kubeadm.go:322] 
	I1221 18:15:06.486646   56640 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1221 18:15:06.486659   56640 kubeadm.go:322] 
	I1221 18:15:06.486702   56640 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1221 18:15:06.486822   56640 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:15:06.486906   56640 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:15:06.486920   56640 kubeadm.go:322] 
	I1221 18:15:06.487000   56640 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1221 18:15:06.487112   56640 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:15:06.487204   56640 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:15:06.487213   56640 kubeadm.go:322] 
	I1221 18:15:06.487317   56640 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:15:06.487420   56640 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1221 18:15:06.487428   56640 kubeadm.go:322] 
	I1221 18:15:06.487525   56640 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dqfhbq.kr8jbzs75a13i65z \
	I1221 18:15:06.487653   56640 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b \
	I1221 18:15:06.487685   56640 kubeadm.go:322]     --control-plane 
	I1221 18:15:06.487692   56640 kubeadm.go:322] 
	I1221 18:15:06.487763   56640 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:15:06.487772   56640 kubeadm.go:322] 
	I1221 18:15:06.487846   56640 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dqfhbq.kr8jbzs75a13i65z \
	I1221 18:15:06.487935   56640 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b 
	I1221 18:15:06.489624   56640 kubeadm.go:322] W1221 18:14:56.210762    1370 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1221 18:15:06.489881   56640 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1221 18:15:06.490016   56640 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 18:15:06.490151   56640 kubeadm.go:322] W1221 18:14:58.531972    1370 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1221 18:15:06.490269   56640 kubeadm.go:322] W1221 18:14:58.532936    1370 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
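
Note: the kubeadm init output above embeds the one-time join command; tooling that automates worker joins typically scrapes the token and CA hash out of this text. A sketch with regexp, where the patterns assume the v1.18-era output format shown above:

package main

import (
	"fmt"
	"regexp"
)

var (
	tokenRe = regexp.MustCompile(`--token (\S+)`)
	hashRe  = regexp.MustCompile(`--discovery-token-ca-cert-hash (sha256:[0-9a-f]+)`)
)

func main() {
	// Taken verbatim from the init output in the log above.
	out := `kubeadm join control-plane.minikube.internal:8443 --token dqfhbq.kr8jbzs75a13i65z \
    --discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b`
	fmt.Println("token:", tokenRe.FindStringSubmatch(out)[1])
	fmt.Println("ca hash:", hashRe.FindStringSubmatch(out)[1])
}
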
	I1221 18:15:06.490279   56640 cni.go:84] Creating CNI manager for ""
	I1221 18:15:06.490287   56640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:15:06.491775   56640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1221 18:15:06.492996   56640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 18:15:06.496545   56640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1221 18:15:06.496570   56640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1221 18:15:06.512367   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 18:15:06.861831   56640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 18:15:06.861891   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:06.861908   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=ingress-addon-legacy-341255 minikube.k8s.io/updated_at=2023_12_21T18_15_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:06.935584   56640 ops.go:34] apiserver oom_adj: -16
	I1221 18:15:06.935605   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:07.436545   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:07.935975   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:08.436526   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:08.935952   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:09.435760   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:09.936282   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:10.436313   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:10.935866   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:11.436587   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:11.935731   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:12.436683   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:12.936064   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:13.436139   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:13.935957   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:14.435678   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:14.936162   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:15.436577   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:15.936128   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:16.436577   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:16.936291   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:17.436036   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:17.935695   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:18.436071   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:18.936535   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:19.436330   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:19.936651   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:20.436198   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:20.935944   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:21.436279   56640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:15:21.506044   56640 kubeadm.go:1088] duration metric: took 14.644203217s to wait for elevateKubeSystemPrivileges.
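
Note: the burst of "kubectl get sa default" calls above is a fixed-interval poll: retry roughly every 500ms until the default ServiceAccount exists (about 14.6s in this run). The generic shape of that wait loop, with a deadline, in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// pollUntil runs probe every interval until it succeeds or timeout elapses.
func pollUntil(interval, timeout time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for probe")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(500*time.Millisecond, 6*time.Minute, func() error {
		// Stand-in for the "kubectl get sa default" probe in the log.
		return exec.Command("kubectl", "get", "sa", "default").Run()
	})
	fmt.Println("done:", err)
}
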
	I1221 18:15:21.506093   56640 kubeadm.go:406] StartCluster complete in 25.387527483s
	I1221 18:15:21.506114   56640 settings.go:142] acquiring lock: {Name:mk8e49e823ae84efe44355981045de15cdb79660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:15:21.506170   56640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:15:21.506824   56640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/kubeconfig: {Name:mk377070c6d3dd4bc3f11638f8c446f488cf8c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:15:21.507059   56640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 18:15:21.507171   56640 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1221 18:15:21.507257   56640 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-341255"
	I1221 18:15:21.507268   56640 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-341255"
	I1221 18:15:21.507284   56640 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-341255"
	I1221 18:15:21.507295   56640 config.go:182] Loaded profile config "ingress-addon-legacy-341255": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1221 18:15:21.507344   56640 host.go:66] Checking if "ingress-addon-legacy-341255" exists ...
	I1221 18:15:21.507303   56640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-341255"
	I1221 18:15:21.507703   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Status}}
	I1221 18:15:21.507717   56640 kapi.go:59] client config for ingress-addon-legacy-341255: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:15:21.507801   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Status}}
	I1221 18:15:21.508554   56640 cert_rotation.go:137] Starting client certificate rotation controller
	I1221 18:15:21.531166   56640 kapi.go:59] client config for ingress-addon-legacy-341255: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:15:21.531430   56640 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-341255"
	I1221 18:15:21.531469   56640 host.go:66] Checking if "ingress-addon-legacy-341255" exists ...
	I1221 18:15:21.531953   56640 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-341255 --format={{.State.Status}}
	I1221 18:15:21.540186   56640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:15:21.541814   56640 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:15:21.541833   56640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 18:15:21.541896   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:15:21.552209   56640 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 18:15:21.552226   56640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 18:15:21.552270   56640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-341255
	I1221 18:15:21.560110   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:15:21.566804   56640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/ingress-addon-legacy-341255/id_rsa Username:docker}
	I1221 18:15:21.689184   56640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 18:15:21.707160   56640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 18:15:21.709689   56640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:15:22.017291   56640 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-341255" context rescaled to 1 replicas
	I1221 18:15:22.017341   56640 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 18:15:22.019325   56640 out.go:177] * Verifying Kubernetes components...
	I1221 18:15:22.021052   56640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:15:22.189770   56640 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1221 18:15:22.310198   56640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1221 18:15:22.311531   56640 addons.go:508] enable addons completed in 804.357567ms: enabled=[default-storageclass storage-provisioner]
	I1221 18:15:22.309222   56640 kapi.go:59] client config for ingress-addon-legacy-341255: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:15:22.311818   56640 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-341255" to be "Ready" ...
	I1221 18:15:24.315569   56640 node_ready.go:58] node "ingress-addon-legacy-341255" has status "Ready":"False"
	I1221 18:15:26.815438   56640 node_ready.go:58] node "ingress-addon-legacy-341255" has status "Ready":"False"
	I1221 18:15:29.315547   56640 node_ready.go:58] node "ingress-addon-legacy-341255" has status "Ready":"False"
	I1221 18:15:31.815659   56640 node_ready.go:58] node "ingress-addon-legacy-341255" has status "Ready":"False"
	I1221 18:15:34.314999   56640 node_ready.go:58] node "ingress-addon-legacy-341255" has status "Ready":"False"
	I1221 18:15:36.315043   56640 node_ready.go:58] node "ingress-addon-legacy-341255" has status "Ready":"False"
	I1221 18:15:36.814563   56640 node_ready.go:49] node "ingress-addon-legacy-341255" has status "Ready":"True"
	I1221 18:15:36.814588   56640 node_ready.go:38] duration metric: took 14.502742114s waiting for node "ingress-addon-legacy-341255" to be "Ready" ...
	I1221 18:15:36.814600   56640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:15:36.820380   56640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-6prs4" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:38.823020   56640 pod_ready.go:102] pod "coredns-66bff467f8-6prs4" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-21 18:15:21 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1221 18:15:40.825390   56640 pod_ready.go:102] pod "coredns-66bff467f8-6prs4" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:42.825496   56640 pod_ready.go:102] pod "coredns-66bff467f8-6prs4" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:44.826140   56640 pod_ready.go:102] pod "coredns-66bff467f8-6prs4" in "kube-system" namespace has status "Ready":"False"
	I1221 18:15:45.826120   56640 pod_ready.go:92] pod "coredns-66bff467f8-6prs4" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:45.826145   56640 pod_ready.go:81] duration metric: took 9.005739575s waiting for pod "coredns-66bff467f8-6prs4" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.826162   56640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.829667   56640 pod_ready.go:92] pod "etcd-ingress-addon-legacy-341255" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:45.829687   56640 pod_ready.go:81] duration metric: took 3.515984ms waiting for pod "etcd-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.829702   56640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.833378   56640 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-341255" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:45.833396   56640 pod_ready.go:81] duration metric: took 3.686973ms waiting for pod "kube-apiserver-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.833403   56640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.837043   56640 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-341255" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:45.837063   56640 pod_ready.go:81] duration metric: took 3.653299ms waiting for pod "kube-controller-manager-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.837075   56640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8sw6v" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.840585   56640 pod_ready.go:92] pod "kube-proxy-8sw6v" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:45.840603   56640 pod_ready.go:81] duration metric: took 3.52249ms waiting for pod "kube-proxy-8sw6v" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:45.840611   56640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:46.021994   56640 request.go:629] Waited for 181.299869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-341255
	I1221 18:15:46.221842   56640 request.go:629] Waited for 197.372314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-341255
	I1221 18:15:46.224418   56640 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-341255" in "kube-system" namespace has status "Ready":"True"
	I1221 18:15:46.224441   56640 pod_ready.go:81] duration metric: took 383.822274ms waiting for pod "kube-scheduler-ingress-addon-legacy-341255" in "kube-system" namespace to be "Ready" ...
	I1221 18:15:46.224458   56640 pod_ready.go:38] duration metric: took 9.409846411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:15:46.224478   56640 api_server.go:52] waiting for apiserver process to appear ...
	I1221 18:15:46.224554   56640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:15:46.234708   56640 api_server.go:72] duration metric: took 24.217327807s to wait for apiserver process to appear ...
	I1221 18:15:46.234731   56640 api_server.go:88] waiting for apiserver healthz status ...
	I1221 18:15:46.234748   56640 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1221 18:15:46.239104   56640 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1221 18:15:46.239769   56640 api_server.go:141] control plane version: v1.18.20
	I1221 18:15:46.239788   56640 api_server.go:131] duration metric: took 5.051994ms to wait for apiserver health ...
	I1221 18:15:46.239796   56640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 18:15:46.422204   56640 request.go:629] Waited for 182.334157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:15:46.427020   56640 system_pods.go:59] 8 kube-system pods found
	I1221 18:15:46.427043   56640 system_pods.go:61] "coredns-66bff467f8-6prs4" [02f92573-9069-4672-b386-d8f7ec9f42ad] Running
	I1221 18:15:46.427048   56640 system_pods.go:61] "etcd-ingress-addon-legacy-341255" [4a94bc34-210c-41c7-bb1a-1c58455d61d0] Running
	I1221 18:15:46.427052   56640 system_pods.go:61] "kindnet-mblns" [4b6bb6f2-1c08-4d65-bac3-1c31643c4a6c] Running
	I1221 18:15:46.427056   56640 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-341255" [679d9051-b8ae-44c9-bc62-3999218e0401] Running
	I1221 18:15:46.427062   56640 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-341255" [cb42350a-3bbe-4f3f-ad87-26360224397a] Running
	I1221 18:15:46.427066   56640 system_pods.go:61] "kube-proxy-8sw6v" [388309e6-6119-467c-8eac-2b9e46bf66b8] Running
	I1221 18:15:46.427070   56640 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-341255" [17c300ff-4a66-4317-a84f-c451d6ca7ab1] Running
	I1221 18:15:46.427074   56640 system_pods.go:61] "storage-provisioner" [3de43818-a4a4-4bf2-9dad-78e75544b274] Running
	I1221 18:15:46.427079   56640 system_pods.go:74] duration metric: took 187.278311ms to wait for pod list to return data ...
	I1221 18:15:46.427093   56640 default_sa.go:34] waiting for default service account to be created ...
	I1221 18:15:46.622417   56640 request.go:629] Waited for 195.235671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1221 18:15:46.624605   56640 default_sa.go:45] found service account: "default"
	I1221 18:15:46.624633   56640 default_sa.go:55] duration metric: took 197.53323ms for default service account to be created ...
	I1221 18:15:46.624645   56640 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 18:15:46.822080   56640 request.go:629] Waited for 197.342367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:15:46.826988   56640 system_pods.go:86] 8 kube-system pods found
	I1221 18:15:46.827009   56640 system_pods.go:89] "coredns-66bff467f8-6prs4" [02f92573-9069-4672-b386-d8f7ec9f42ad] Running
	I1221 18:15:46.827015   56640 system_pods.go:89] "etcd-ingress-addon-legacy-341255" [4a94bc34-210c-41c7-bb1a-1c58455d61d0] Running
	I1221 18:15:46.827019   56640 system_pods.go:89] "kindnet-mblns" [4b6bb6f2-1c08-4d65-bac3-1c31643c4a6c] Running
	I1221 18:15:46.827023   56640 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-341255" [679d9051-b8ae-44c9-bc62-3999218e0401] Running
	I1221 18:15:46.827028   56640 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-341255" [cb42350a-3bbe-4f3f-ad87-26360224397a] Running
	I1221 18:15:46.827031   56640 system_pods.go:89] "kube-proxy-8sw6v" [388309e6-6119-467c-8eac-2b9e46bf66b8] Running
	I1221 18:15:46.827038   56640 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-341255" [17c300ff-4a66-4317-a84f-c451d6ca7ab1] Running
	I1221 18:15:46.827042   56640 system_pods.go:89] "storage-provisioner" [3de43818-a4a4-4bf2-9dad-78e75544b274] Running
	I1221 18:15:46.827048   56640 system_pods.go:126] duration metric: took 202.398457ms to wait for k8s-apps to be running ...
	I1221 18:15:46.827057   56640 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 18:15:46.827097   56640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:15:46.837289   56640 system_svc.go:56] duration metric: took 10.22411ms WaitForService to wait for kubelet.
	I1221 18:15:46.837311   56640 kubeadm.go:581] duration metric: took 24.819932674s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1221 18:15:46.837332   56640 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:15:47.021677   56640 request.go:629] Waited for 184.273131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1221 18:15:47.024217   56640 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 18:15:47.024241   56640 node_conditions.go:123] node cpu capacity is 8
	I1221 18:15:47.024252   56640 node_conditions.go:105] duration metric: took 186.916329ms to run NodePressure ...
	I1221 18:15:47.024262   56640 start.go:228] waiting for startup goroutines ...
	I1221 18:15:47.024268   56640 start.go:233] waiting for cluster config update ...
	I1221 18:15:47.024281   56640 start.go:242] writing updated cluster config ...
	I1221 18:15:47.024530   56640 ssh_runner.go:195] Run: rm -f paused
	I1221 18:15:47.068670   56640 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1221 18:15:47.070738   56640 out.go:177] 
	W1221 18:15:47.072106   56640 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1221 18:15:47.073385   56640 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1221 18:15:47.074734   56640 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-341255" cluster and "default" namespace by default
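	
	The node_ready.go and pod_ready.go lines above are minikube polling the API server until the node and every system-critical pod report Ready. Below is a minimal sketch of that readiness poll using client-go; the client-go calls are assumed here, and minikube's actual implementation adds retry backoff and the extra label-based pod waits.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitNodeReady polls the Node object until its NodeReady condition is True,
	// mirroring the "waiting up to 6m0s for node ... to be Ready" lines above.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node now reports "Ready":"True"
				}
			}
			time.Sleep(2 * time.Second) // roughly the poll cadence visible above
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitNodeReady(cs, "ingress-addon-legacy-341255", 6*time.Minute); err != nil {
			panic(err)
		}
	}
	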
	
	
	==> CRI-O <==
	Dec 21 18:18:43 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:43.446407615Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-twn2p/hello-world-app" id=7706cb6c-d149-4a96-97e3-8f7a3f4b1c5d name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 21 18:18:43 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:43.446516086Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 21 18:18:43 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:43.531565726Z" level=info msg="Created container 3d5473c016f8bac330803c2afd2c63850495b18b91277c4f615654a8b5584c49: default/hello-world-app-5f5d8b66bb-twn2p/hello-world-app" id=7706cb6c-d149-4a96-97e3-8f7a3f4b1c5d name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Dec 21 18:18:43 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:43.532074572Z" level=info msg="Starting container: 3d5473c016f8bac330803c2afd2c63850495b18b91277c4f615654a8b5584c49" id=6eecf8c1-0cad-42df-a82e-fc57b441daaf name=/runtime.v1alpha2.RuntimeService/StartContainer
	Dec 21 18:18:43 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:43.538870794Z" level=info msg="Started container" PID=4863 containerID=3d5473c016f8bac330803c2afd2c63850495b18b91277c4f615654a8b5584c49 description=default/hello-world-app-5f5d8b66bb-twn2p/hello-world-app id=6eecf8c1-0cad-42df-a82e-fc57b441daaf name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=bd59ef884298534690e97ec6dbc8272f67a20dfa0546b1faa8379a093b594efd
	Dec 21 18:18:50 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:50.692064303Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=e6f2c0ae-8122-4b1b-b5f3-945e79dc0670 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 21 18:18:56 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:56.691737100Z" level=info msg="Stopping pod sandbox: 08333623bbbd86c6b33b0672b2e53382cac2b8bc5e1e1fd51e9eb7ba29a1b80b" id=f463993d-2608-4468-9ddd-9a8068f59b70 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 21 18:18:56 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:56.692640571Z" level=info msg="Stopped pod sandbox: 08333623bbbd86c6b33b0672b2e53382cac2b8bc5e1e1fd51e9eb7ba29a1b80b" id=f463993d-2608-4468-9ddd-9a8068f59b70 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 21 18:18:57 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:57.444271751Z" level=info msg="Stopping container: a7ad412c281aac7875e7f0175295df34a67ed26722a34abe2e9104a71597fa07 (timeout: 2s)" id=cbb45589-3d1c-4875-8b54-5598e3bd7292 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 21 18:18:57 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:57.446485554Z" level=info msg="Stopping container: a7ad412c281aac7875e7f0175295df34a67ed26722a34abe2e9104a71597fa07 (timeout: 2s)" id=72d6c35a-271c-4e64-bbdc-e0ecfebd451e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.452219752Z" level=warning msg="Stopping container a7ad412c281aac7875e7f0175295df34a67ed26722a34abe2e9104a71597fa07 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=cbb45589-3d1c-4875-8b54-5598e3bd7292 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 21 18:18:59 ingress-addon-legacy-341255 conmon[3397]: conmon a7ad412c281aac7875e7 <ninfo>: container 3409 exited with status 137
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.574085853Z" level=info msg="Stopped container a7ad412c281aac7875e7f0175295df34a67ed26722a34abe2e9104a71597fa07: ingress-nginx/ingress-nginx-controller-7fcf777cb7-qtqvb/controller" id=cbb45589-3d1c-4875-8b54-5598e3bd7292 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.574110481Z" level=info msg="Stopped container a7ad412c281aac7875e7f0175295df34a67ed26722a34abe2e9104a71597fa07: ingress-nginx/ingress-nginx-controller-7fcf777cb7-qtqvb/controller" id=72d6c35a-271c-4e64-bbdc-e0ecfebd451e name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.574695264Z" level=info msg="Stopping pod sandbox: 7d5df0420d6a7fb7250199c347f1d8b76c6b40278d2840affc32e0e0036391e4" id=c666902e-17e2-4b26-bf2a-b53c29e00e2a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.574827744Z" level=info msg="Stopping pod sandbox: 7d5df0420d6a7fb7250199c347f1d8b76c6b40278d2840affc32e0e0036391e4" id=df38a3f1-5230-4420-8280-a777457a48b9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.577352861Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-I275XFXAHKMMNZFY - [0:0]\n:KUBE-HP-2IRUHT5SV2RSEUDO - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-I275XFXAHKMMNZFY\n-X KUBE-HP-2IRUHT5SV2RSEUDO\nCOMMIT\n"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.578506035Z" level=info msg="Closing host port tcp:80"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.578547429Z" level=info msg="Closing host port tcp:443"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.579473811Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.579494177Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.579640210Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-qtqvb Namespace:ingress-nginx ID:7d5df0420d6a7fb7250199c347f1d8b76c6b40278d2840affc32e0e0036391e4 UID:1e2b2789-be94-46b5-910d-0eebfdcb0a8b NetNS:/var/run/netns/84b4624b-8b7a-4542-9f52-63f8c4f3b4f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.579789189Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-qtqvb from CNI network \"kindnet\" (type=ptp)"
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.614658390Z" level=info msg="Stopped pod sandbox: 7d5df0420d6a7fb7250199c347f1d8b76c6b40278d2840affc32e0e0036391e4" id=c666902e-17e2-4b26-bf2a-b53c29e00e2a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 21 18:18:59 ingress-addon-legacy-341255 crio[952]: time="2023-12-21 18:18:59.614773624Z" level=info msg="Stopped pod sandbox (already stopped): 7d5df0420d6a7fb7250199c347f1d8b76c6b40278d2840affc32e0e0036391e4" id=df38a3f1-5230-4420-8280-a777457a48b9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
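	
	The StopContainer lines above show the 2-second stop timeout in action: CRI-O sends the stop signal, logs the timeout warning when the controller does not exit in time, and the container is then killed, which conmon reports as exit status 137 (128 + SIGKILL). A hypothetical Go sketch of that term-then-kill pattern, using a stand-in process rather than CRI-O's actual code:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"syscall"
		"time"
	)
	
	// stopWithTimeout sends SIGTERM, waits up to timeout for the process to exit,
	// and falls back to SIGKILL, which the kernel reports as exit status 137.
	func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()
		_ = cmd.Process.Signal(syscall.SIGTERM) // the polite stop signal first
		select {
		case err := <-done:
			return err // exited within the grace period
		case <-time.After(timeout):
			_ = cmd.Process.Kill() // SIGKILL: 128 + 9 = 137
			return fmt.Errorf("stop signal timed out after %v; killed", timeout)
		}
	}
	
	func main() {
		cmd := exec.Command("sleep", "60") // stand-in for the container process
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		fmt.Println(stopWithTimeout(cmd, 2*time.Second))
	}
	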
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d5473c016f8b       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            21 seconds ago      Running             hello-world-app           0                   bd59ef8842985       hello-world-app-5f5d8b66bb-twn2p
	d06e96dea78a4       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   ab9e2f44a3b98       nginx
	a7ad412c281aa       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   7d5df0420d6a7       ingress-nginx-controller-7fcf777cb7-qtqvb
	48ba6fc3f7eae       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   c6e5ab16b833e       ingress-nginx-admission-patch-cttr7
	dbd2ac7f28dad       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   284f8d69b8952       ingress-nginx-admission-create-4pnj5
	7e40f8919b712       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   b94132b247af7       storage-provisioner
	cbf5948816775       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   b9b292da90ba5       coredns-66bff467f8-6prs4
	135d27d98ae9f       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   8c88b0585a476       kindnet-mblns
	99c225f162f6b       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   75ce07f245623       kube-proxy-8sw6v
	7e8f7492331e1       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   27ad71fe8b6aa       kube-apiserver-ingress-addon-legacy-341255
	8270d11e59381       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   58cc8e7bb685a       kube-controller-manager-ingress-addon-legacy-341255
	5134be9f27139       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   c204e8e490a2e       kube-scheduler-ingress-addon-legacy-341255
	2ec8d9d2bce95       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   71eff6cc5a23c       etcd-ingress-addon-legacy-341255
	
	
	==> coredns [cbf5948816775be151c66ea94c04a1038a06af5facc4c3709ba1ce4fa27457c8] <==
	[INFO] 10.244.0.5:33859 - 17549 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00380162s
	[INFO] 10.244.0.5:47875 - 1603 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002823935s
	[INFO] 10.244.0.5:56038 - 22993 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002943344s
	[INFO] 10.244.0.5:33859 - 65419 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002710276s
	[INFO] 10.244.0.5:58741 - 6634 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00316129s
	[INFO] 10.244.0.5:60366 - 21869 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002667088s
	[INFO] 10.244.0.5:56148 - 52109 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003091588s
	[INFO] 10.244.0.5:46182 - 36223 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002921059s
	[INFO] 10.244.0.5:41148 - 36311 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002758241s
	[INFO] 10.244.0.5:41148 - 51963 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003041763s
	[INFO] 10.244.0.5:60366 - 20644 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003387449s
	[INFO] 10.244.0.5:56148 - 57459 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003207696s
	[INFO] 10.244.0.5:33859 - 60519 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003132142s
	[INFO] 10.244.0.5:56038 - 57435 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003470861s
	[INFO] 10.244.0.5:58741 - 1163 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003320199s
	[INFO] 10.244.0.5:41148 - 26978 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000043417s
	[INFO] 10.244.0.5:46182 - 50376 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00347024s
	[INFO] 10.244.0.5:60366 - 44340 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000160377s
	[INFO] 10.244.0.5:47875 - 22117 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003784848s
	[INFO] 10.244.0.5:46182 - 32235 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069423s
	[INFO] 10.244.0.5:58741 - 28610 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000249777s
	[INFO] 10.244.0.5:33859 - 12151 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000275579s
	[INFO] 10.244.0.5:56038 - 37614 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000370875s
	[INFO] 10.244.0.5:56148 - 137 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000290704s
	[INFO] 10.244.0.5:47875 - 46347 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088398s
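	
	The burst of NXDOMAIN answers above is ordinary search-path expansion, not a failure: with the pod's resolv.conf (typically ndots:5, plus the cluster suffixes and, on this GCE host, c.k8s-minikube.internal and google.internal), each search suffix is tried before the bare service name answers NOERROR. A small sketch of the distinction; it assumes the lookup runs inside the cluster, and the trailing dot marks the name fully qualified so no suffixes are appended:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// The trailing dot makes the name absolute: the resolver skips the
		// search suffixes that produced the NXDOMAIN lines above.
		addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
		fmt.Println(addrs, err) // resolvable only from inside the cluster
	}
	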
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-341255
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-341255
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea
	                    minikube.k8s.io/name=ingress-addon-legacy-341255
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_21T18_15_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Dec 2023 18:15:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-341255
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Dec 2023 18:18:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Dec 2023 18:16:36 +0000   Thu, 21 Dec 2023 18:15:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Dec 2023 18:16:36 +0000   Thu, 21 Dec 2023 18:15:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Dec 2023 18:16:36 +0000   Thu, 21 Dec 2023 18:15:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Dec 2023 18:16:36 +0000   Thu, 21 Dec 2023 18:15:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-341255
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 350704640a484244beff2d44cfbed34f
	  System UUID:                539e3302-ac59-4e6d-96a3-50273335f8c0
	  Boot ID:                    d99d8f8f-1497-48b1-8406-284c1d2cae5c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-twn2p                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-6prs4                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-341255                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kindnet-mblns                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m44s
	  kube-system                 kube-apiserver-ingress-addon-legacy-341255             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-341255    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-8sw6v                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-341255             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m7s (x6 over 4m7s)  kubelet     Node ingress-addon-legacy-341255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x5 over 4m7s)  kubelet     Node ingress-addon-legacy-341255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x5 over 4m7s)  kubelet     Node ingress-addon-legacy-341255 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                kubelet     Node ingress-addon-legacy-341255 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                kubelet     Node ingress-addon-legacy-341255 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                kubelet     Node ingress-addon-legacy-341255 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                kubelet     Node ingress-addon-legacy-341255 status is now: NodeReady
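	
	The "Allocated resources" summary above is just the sum of container requests and limits over the non-terminated pods, divided by the node's allocatable capacity: 100m (coredns) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 750m of 8 CPUs, which rounds to 9%. A worked sketch of that arithmetic with k8s.io/apimachinery's resource.Quantity, using the values copied from the table:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/apimachinery/pkg/api/resource"
	)
	
	func main() {
		// 100m + 100m + 250m + 200m + 100m from the pod table above
		requests := resource.MustParse("750m")
		allocatable := resource.MustParse("8") // the node's 8 allocatable CPUs
		pct := float64(requests.MilliValue()) / float64(allocatable.MilliValue()) * 100
		fmt.Printf("cpu requests: %s of %s cores (%.0f%%)\n",
			requests.String(), allocatable.String(), pct) // 750m of 8 cores (9%)
	}
	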
	
	
	==> dmesg <==
	[  +0.004939] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006565] FS-Cache: N-cookie d=00000000ac11eb8c{9p.inode} n=00000000ae093f63
	[  +0.008723] FS-Cache: N-key=[8] '85a00f0200000000'
	[  +2.570987] FS-Cache: Duplicate cookie detected
	[  +0.004718] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006742] FS-Cache: O-cookie d=0000000044b78208{9P.session} n=000000008e746005
	[  +0.007517] FS-Cache: O-key=[10] '34323935373333333334'
	[  +0.005348] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006584] FS-Cache: N-cookie d=0000000044b78208{9P.session} n=00000000cbd55615
	[  +0.008899] FS-Cache: N-key=[10] '34323935373333333334'
	[Dec21 18:14] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec21 18:16] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +1.028201] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +2.015857] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +4.223691] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +8.191334] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[Dec21 18:17] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[ +34.045433] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	
	
	==> etcd [2ec8d9d2bce95692a78b1578e68cb2570633c6e606ffccb0190d76ec562a1bb1] <==
	raft2023/12/21 18:14:59 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/21 18:14:59 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/21 18:14:59 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/21 18:14:59 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-21 18:14:59.713141 W | auth: simple token is not cryptographically signed
	2023-12-21 18:14:59.718132 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/12/21 18:14:59 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-21 18:14:59.721625 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-21 18:14:59.723462 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-21 18:14:59.723592 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-21 18:14:59.723623 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-21 18:14:59.723632 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/21 18:15:00 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/21 18:15:00 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/21 18:15:00 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/21 18:15:00 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/21 18:15:00 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-21 18:15:00.310022 I | etcdserver: published {Name:ingress-addon-legacy-341255 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-21 18:15:00.310041 I | embed: ready to serve client requests
	2023-12-21 18:15:00.310153 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-21 18:15:00.310195 I | embed: ready to serve client requests
	2023-12-21 18:15:00.310593 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-21 18:15:00.311371 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-21 18:15:00.312613 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-21 18:15:00.313100 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 18:19:05 up  1:01,  0 users,  load average: 0.06, 0.53, 0.45
	Linux ingress-addon-legacy-341255 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [135d27d98ae9ff9643a1758b6dce4a5f1b3adb330f4dcf83c0fccfbcef3f4ad1] <==
	I1221 18:16:57.224138       1 main.go:227] handling current node
	I1221 18:17:07.236722       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:17:07.236749       1 main.go:227] handling current node
	I1221 18:17:17.240339       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:17:17.240361       1 main.go:227] handling current node
	I1221 18:17:27.243242       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:17:27.243265       1 main.go:227] handling current node
	I1221 18:17:37.247857       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:17:37.247881       1 main.go:227] handling current node
	I1221 18:17:47.259929       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:17:47.259954       1 main.go:227] handling current node
	I1221 18:17:57.263479       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:17:57.263506       1 main.go:227] handling current node
	I1221 18:18:07.275159       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:18:07.275200       1 main.go:227] handling current node
	I1221 18:18:17.278495       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:18:17.278518       1 main.go:227] handling current node
	I1221 18:18:27.282039       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:18:27.282063       1 main.go:227] handling current node
	I1221 18:18:37.286408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:18:37.286433       1 main.go:227] handling current node
	I1221 18:18:47.298298       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:18:47.298321       1 main.go:227] handling current node
	I1221 18:18:57.301960       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1221 18:18:57.301981       1 main.go:227] handling current node
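	
	The kindnet lines above repeat on a roughly 10-second cadence: the daemon periodically lists the cluster's nodes and reconciles each one. A minimal sketch of that loop shape (assumed behavior; the real handler programs CNI routes rather than logging):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for i := 0; i < 3; i++ { // bounded here; the daemon loops forever
			<-ticker.C
			// kindnet would list nodes here and handle each in turn
			fmt.Println("Handling node with IPs: map[192.168.49.2:{}]")
		}
	}
	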
	
	
	==> kube-apiserver [7e8f7492331e15102b107544a8847a28ae9ed174d21b930210ab04cf1ae00712] <==
	E1221 18:15:03.311099       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1221 18:15:03.407886       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 18:15:03.408139       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1221 18:15:03.408427       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1221 18:15:03.408602       1 cache.go:39] Caches are synced for autoregister controller
	I1221 18:15:03.409220       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1221 18:15:04.307028       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1221 18:15:04.307052       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1221 18:15:04.311504       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1221 18:15:04.314226       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1221 18:15:04.314246       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1221 18:15:04.556573       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 18:15:04.592416       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1221 18:15:04.713165       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1221 18:15:04.714036       1 controller.go:609] quota admission added evaluator for: endpoints
	I1221 18:15:04.716771       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 18:15:05.655181       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1221 18:15:06.292603       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1221 18:15:06.477585       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1221 18:15:06.651402       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 18:15:21.513453       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1221 18:15:21.679517       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1221 18:15:47.722400       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1221 18:16:17.785703       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1221 18:18:57.453144       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [8270d11e59381a261a08f7155c5cc4d2a557d917b0b421a998ac8bed93d2bc87] <==
	I1221 18:15:21.457895       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1221 18:15:21.507946       1 shared_informer.go:230] Caches are synced for stateful set 
	I1221 18:15:21.508417       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1221 18:15:21.522012       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"c39c7f2e-baee-48a8-9585-d423d57aa398", APIVersion:"apps/v1", ResourceVersion:"240", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-mblns
	I1221 18:15:21.522052       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"97c4dc9c-1e0e-4771-8235-d4c74bcf466d", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-8sw6v
	I1221 18:15:21.616201       1 shared_informer.go:230] Caches are synced for resource quota 
	I1221 18:15:21.616568       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1221 18:15:21.658100       1 shared_informer.go:230] Caches are synced for disruption 
	I1221 18:15:21.658121       1 disruption.go:339] Sending events to api server.
	I1221 18:15:21.660501       1 shared_informer.go:230] Caches are synced for resource quota 
	I1221 18:15:21.677325       1 shared_informer.go:230] Caches are synced for deployment 
	I1221 18:15:21.677947       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1221 18:15:21.677959       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1221 18:15:21.687399       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"6cb0224a-2e7c-4132-a6b7-f4c992581e59", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1221 18:15:21.694252       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"e248b6b9-97a7-40c3-8ef1-dadea071f864", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-6prs4
	I1221 18:15:41.109079       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1221 18:15:47.715879       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6edf4013-2427-4c56-8ac5-d3df0f926dfd", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1221 18:15:47.723601       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"21d506ef-d01d-4eb9-a324-ef8e7d21c0ad", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-qtqvb
	I1221 18:15:47.733018       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"05904087-67b3-4c97-a6ce-dc9c6ba9cb7c", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-4pnj5
	I1221 18:15:47.796549       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"c38182c6-014d-4a56-a209-a3fae2004c97", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-cttr7
	I1221 18:15:52.803244       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"05904087-67b3-4c97-a6ce-dc9c6ba9cb7c", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1221 18:15:53.804851       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"c38182c6-014d-4a56-a209-a3fae2004c97", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1221 18:18:40.557682       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"5d2c748e-a0ec-41ea-80d1-d9b18e8cfb10", APIVersion:"apps/v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1221 18:18:40.562228       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"55e93ba1-5ba5-4540-92a4-33bbfcead94d", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-twn2p
	E1221 18:19:02.201188       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-wfs74" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [99c225f162f6b7ba6cffa474d2126730f1444516cbd91c72794d9f613a16d887] <==
	W1221 18:15:22.339312       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1221 18:15:22.345417       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1221 18:15:22.345501       1 server_others.go:186] Using iptables Proxier.
	I1221 18:15:22.346126       1 server.go:583] Version: v1.18.20
	I1221 18:15:22.346512       1 config.go:133] Starting endpoints config controller
	I1221 18:15:22.346557       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1221 18:15:22.346615       1 config.go:315] Starting service config controller
	I1221 18:15:22.346628       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1221 18:15:22.446733       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1221 18:15:22.446770       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [5134be9f271395ed3eccb54ea6eee5ba04afa5ff7ccf8fc0f758a4d6b4ea26dc] <==
	I1221 18:15:03.401689       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1221 18:15:03.401711       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1221 18:15:03.403363       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1221 18:15:03.403453       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1221 18:15:03.403795       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1221 18:15:03.403952       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1221 18:15:03.404996       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 18:15:03.405438       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:15:03.405700       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:15:03.405903       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1221 18:15:03.405910       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1221 18:15:03.406060       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1221 18:15:03.406253       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 18:15:03.406261       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1221 18:15:03.406387       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1221 18:15:03.406391       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1221 18:15:03.406451       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1221 18:15:03.406868       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1221 18:15:04.276017       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:15:04.315176       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1221 18:15:04.369324       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:15:04.398425       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1221 18:15:04.443691       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1221 18:15:06.603646       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1221 18:15:21.704550       1 factory.go:503] pod: kube-system/coredns-66bff467f8-6prs4 is already present in the active queue
	
	
	==> kubelet <==
	Dec 21 18:18:24 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:24.692499    1844 pod_workers.go:191] Error syncing pod 6aba3a68-7812-4c2d-bf3a-a21fd746a21a ("kube-ingress-dns-minikube_kube-system(6aba3a68-7812-4c2d-bf3a-a21fd746a21a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 21 18:18:35 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:35.692298    1844 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 21 18:18:35 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:35.692344    1844 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 21 18:18:35 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:35.692386    1844 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 21 18:18:35 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:35.692411    1844 pod_workers.go:191] Error syncing pod 6aba3a68-7812-4c2d-bf3a-a21fd746a21a ("kube-ingress-dns-minikube_kube-system(6aba3a68-7812-4c2d-bf3a-a21fd746a21a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 21 18:18:40 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:18:40.567409    1844 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 21 18:18:40 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:18:40.653166    1844 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-qwp9s" (UniqueName: "kubernetes.io/secret/c339be86-748b-410b-8e92-1dd397a8e431-default-token-qwp9s") pod "hello-world-app-5f5d8b66bb-twn2p" (UID: "c339be86-748b-410b-8e92-1dd397a8e431")
	Dec 21 18:18:40 ingress-addon-legacy-341255 kubelet[1844]: W1221 18:18:40.910367    1844 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/1f866a6ce2e51ba81814dede66c5b6e7bb1da4a422b6815b20328563012a4a59/crio-bd59ef884298534690e97ec6dbc8272f67a20dfa0546b1faa8379a093b594efd WatchSource:0}: Error finding container bd59ef884298534690e97ec6dbc8272f67a20dfa0546b1faa8379a093b594efd: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000c45980 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Dec 21 18:18:50 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:50.692544    1844 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 21 18:18:50 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:50.692584    1844 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 21 18:18:50 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:50.692641    1844 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 21 18:18:50 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:50.692675    1844 pod_workers.go:191] Error syncing pod 6aba3a68-7812-4c2d-bf3a-a21fd746a21a ("kube-ingress-dns-minikube_kube-system(6aba3a68-7812-4c2d-bf3a-a21fd746a21a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 21 18:18:56 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:18:56.313588    1844 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-hzhh4" (UniqueName: "kubernetes.io/secret/6aba3a68-7812-4c2d-bf3a-a21fd746a21a-minikube-ingress-dns-token-hzhh4") pod "6aba3a68-7812-4c2d-bf3a-a21fd746a21a" (UID: "6aba3a68-7812-4c2d-bf3a-a21fd746a21a")
	Dec 21 18:18:56 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:18:56.315383    1844 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aba3a68-7812-4c2d-bf3a-a21fd746a21a-minikube-ingress-dns-token-hzhh4" (OuterVolumeSpecName: "minikube-ingress-dns-token-hzhh4") pod "6aba3a68-7812-4c2d-bf3a-a21fd746a21a" (UID: "6aba3a68-7812-4c2d-bf3a-a21fd746a21a"). InnerVolumeSpecName "minikube-ingress-dns-token-hzhh4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:18:56 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:18:56.413863    1844 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-hzhh4" (UniqueName: "kubernetes.io/secret/6aba3a68-7812-4c2d-bf3a-a21fd746a21a-minikube-ingress-dns-token-hzhh4") on node "ingress-addon-legacy-341255" DevicePath ""
	Dec 21 18:18:57 ingress-addon-legacy-341255 kubelet[1844]: W1221 18:18:57.046296    1844 pod_container_deletor.go:77] Container "08333623bbbd86c6b33b0672b2e53382cac2b8bc5e1e1fd51e9eb7ba29a1b80b" not found in pod's containers
	Dec 21 18:18:57 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:57.445221    1844 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qtqvb.17a2ebad04ead864", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qtqvb", UID:"1e2b2789-be94-46b5-910d-0eebfdcb0a8b", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-341255"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1593e445a746e64, ext:231189090651, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1593e445a746e64, ext:231189090651, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qtqvb.17a2ebad04ead864" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 21 18:18:57 ingress-addon-legacy-341255 kubelet[1844]: E1221 18:18:57.449086    1844 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qtqvb.17a2ebad04ead864", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qtqvb", UID:"1e2b2789-be94-46b5-910d-0eebfdcb0a8b", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-341255"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1593e445a746e64, ext:231189090651, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1593e445a991d1e, ext:231191494670, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qtqvb.17a2ebad04ead864" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: W1221 18:19:00.051398    1844 pod_container_deletor.go:77] Container "7d5df0420d6a7fb7250199c347f1d8b76c6b40278d2840affc32e0e0036391e4" not found in pod's containers
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:19:00.322171    1844 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1e2b2789-be94-46b5-910d-0eebfdcb0a8b-webhook-cert") pod "1e2b2789-be94-46b5-910d-0eebfdcb0a8b" (UID: "1e2b2789-be94-46b5-910d-0eebfdcb0a8b")
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:19:00.322229    1844 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-vd7wj" (UniqueName: "kubernetes.io/secret/1e2b2789-be94-46b5-910d-0eebfdcb0a8b-ingress-nginx-token-vd7wj") pod "1e2b2789-be94-46b5-910d-0eebfdcb0a8b" (UID: "1e2b2789-be94-46b5-910d-0eebfdcb0a8b")
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:19:00.324127    1844 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2b2789-be94-46b5-910d-0eebfdcb0a8b-ingress-nginx-token-vd7wj" (OuterVolumeSpecName: "ingress-nginx-token-vd7wj") pod "1e2b2789-be94-46b5-910d-0eebfdcb0a8b" (UID: "1e2b2789-be94-46b5-910d-0eebfdcb0a8b"). InnerVolumeSpecName "ingress-nginx-token-vd7wj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:19:00.324242    1844 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2b2789-be94-46b5-910d-0eebfdcb0a8b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1e2b2789-be94-46b5-910d-0eebfdcb0a8b" (UID: "1e2b2789-be94-46b5-910d-0eebfdcb0a8b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:19:00.422480    1844 reconciler.go:319] Volume detached for volume "ingress-nginx-token-vd7wj" (UniqueName: "kubernetes.io/secret/1e2b2789-be94-46b5-910d-0eebfdcb0a8b-ingress-nginx-token-vd7wj") on node "ingress-addon-legacy-341255" DevicePath ""
	Dec 21 18:19:00 ingress-addon-legacy-341255 kubelet[1844]: I1221 18:19:00.422508    1844 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1e2b2789-be94-46b5-910d-0eebfdcb0a8b-webhook-cert") on node "ingress-addon-legacy-341255" DevicePath ""
	
	
	==> storage-provisioner [7e40f8919b712bab1d0501bbb6fbcba7f8178557b5f9d78b11c087959b07d266] <==
	I1221 18:15:41.738206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1221 18:15:41.746205       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1221 18:15:41.746246       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1221 18:15:41.750614       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1221 18:15:41.750719       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-341255_ad044664-6516-4f8a-afbe-295fdfa3769a!
	I1221 18:15:41.750718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7283ab0-3f2b-459c-b6e6-2e3b7807149b", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-341255_ad044664-6516-4f8a-afbe-295fdfa3769a became leader
	I1221 18:15:41.851024       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-341255_ad044664-6516-4f8a-afbe-295fdfa3769a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-341255 -n ingress-addon-legacy-341255
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-341255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.94s)
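Root cause of the repeated kubelet ImageInspectError above: the ingress-dns addon references its image by short name (no registry host), and CRI-O refuses to guess a registry when /etc/containers/registries.conf defines no unqualified-search registries. A minimal sketch of the missing setting follows (docker.io is an assumed registry choice; nothing in this run applies it):

	# /etc/containers/registries.conf (hypothetical addition, not part of this run)
	unqualified-search-registries = ["docker.io"]

Fully qualifying the reference in the addon manifest, e.g. docker.io/cryptexlabs/minikube-ingress-dns:0.3.0, would sidestep the search-registry lookup entirely.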

TestMultiNode/serial/PingHostFrom2Pods (2.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-pvfqq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-pvfqq -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-pvfqq -- sh -c "ping -c 1 192.168.58.1": exit status 1 (172.573711ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-pvfqq): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-qq9gx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-qq9gx -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-qq9gx -- sh -c "ping -c 1 192.168.58.1": exit status 1 (180.106285ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-qq9gx): exit status 1
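Both pods fail identically: busybox's ping applet opens a raw ICMP socket, which the kernel refuses with EPERM when the container lacks CAP_NET_RAW (recent CRI-O releases likely omit it from the default capability set), hence "permission denied (are you root?)" before any packet leaves the pod. The preceding nslookup pipeline succeeds because DNS needs only an ordinary UDP socket. A hedged sketch of one workaround, assuming the busybox deployment created by this test tolerates a securityContext patch:

	kubectl --context multinode-186629 patch deployment busybox --type=json \
	  -p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["NET_RAW"]}}}]'

Alternatively, widening net.ipv4.ping_group_range on the node permits unprivileged ICMP echo sockets, though only for ping implementations that use datagram ICMP rather than raw sockets.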
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-186629
helpers_test.go:235: (dbg) docker inspect multinode-186629:

-- stdout --
	[
	    {
	        "Id": "cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4",
	        "Created": "2023-12-21T18:23:42.293771977Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 103783,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-21T18:23:42.570058983Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aaeab328720c5f9c5998a41dcf23df3cc1d95a0c58c535e504f0d445f5dfad94",
	        "ResolvConfPath": "/var/lib/docker/containers/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/hosts",
	        "LogPath": "/var/lib/docker/containers/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4-json.log",
	        "Name": "/multinode-186629",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-186629:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-186629",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8e58101118e6f7d224e924a0d6edc85924bd4ed727b0189a8376bcb533e786a-init/diff:/var/lib/docker/overlay2/5f93c210e62b94f4976b2a81580f0bf0da95be40a907596ee84a499ee959f455/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8e58101118e6f7d224e924a0d6edc85924bd4ed727b0189a8376bcb533e786a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8e58101118e6f7d224e924a0d6edc85924bd4ed727b0189a8376bcb533e786a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8e58101118e6f7d224e924a0d6edc85924bd4ed727b0189a8376bcb533e786a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-186629",
	                "Source": "/var/lib/docker/volumes/multinode-186629/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-186629",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-186629",
	                "name.minikube.sigs.k8s.io": "multinode-186629",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d2fedb1233aa64ba3653a7232dd453d60982be5f662d5c38959ef25b67dece9d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32849"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32848"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d2fedb1233aa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-186629": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cf3b85f473e9",
	                        "multinode-186629"
	                    ],
	                    "NetworkID": "6041e72d4cd85f4299202300c09efdf4fbe050657b0ed5f67c4c2bf2e2a84ccc",
	                    "EndpointID": "84cab52192238fe58b7874b56ccb3ff2c2c6ee0cb58018a982e5353b1339d18d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
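A side note on the inspect output: each HostConfig.PortBindings entry requests HostIp 127.0.0.1 with an empty HostPort, which asks Docker to pick an ephemeral loopback port at container start; NetworkSettings.Ports shows the ports actually assigned for this run (32845-32849). The live mapping can be confirmed with docker port, for example:

	docker port multinode-186629 22
	# prints 127.0.0.1:32849 for this run; the value changes on every container start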
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-186629 -n multinode-186629
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-186629 logs -n 25: (1.123982563s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-054226                           | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-054226 ssh -- ls                    | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-040885                           | mount-start-1-040885 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-054226 ssh -- ls                    | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-054226                           | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	| start   | -p mount-start-2-054226                           | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	| ssh     | mount-start-2-054226 ssh -- ls                    | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-054226                           | mount-start-2-054226 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	| delete  | -p mount-start-1-040885                           | mount-start-1-040885 | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:23 UTC |
	| start   | -p multinode-186629                               | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:23 UTC | 21 Dec 23 18:24 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- apply -f                   | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:24 UTC | 21 Dec 23 18:24 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- rollout                    | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:24 UTC | 21 Dec 23 18:25 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- get pods -o                | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- get pods -o                | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-pvfqq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-qq9gx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-pvfqq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-qq9gx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-pvfqq -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-qq9gx -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- get pods -o                | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-pvfqq                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC |                     |
	|         | busybox-5bc68d56bd-pvfqq -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC | 21 Dec 23 18:25 UTC |
	|         | busybox-5bc68d56bd-qq9gx                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-186629 -- exec                       | multinode-186629     | jenkins | v1.32.0 | 21 Dec 23 18:25 UTC |                     |
	|         | busybox-5bc68d56bd-qq9gx -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:23:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:23:36.486687  103175 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:23:36.486782  103175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:23:36.486789  103175 out.go:309] Setting ErrFile to fd 2...
	I1221 18:23:36.486794  103175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:23:36.487019  103175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:23:36.487544  103175 out.go:303] Setting JSON to false
	I1221 18:23:36.488824  103175 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3964,"bootTime":1703179053,"procs":800,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:23:36.488886  103175 start.go:138] virtualization: kvm guest
	I1221 18:23:36.490957  103175 out.go:177] * [multinode-186629] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:23:36.492417  103175 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:23:36.492413  103175 notify.go:220] Checking for updates...
	I1221 18:23:36.493994  103175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:23:36.495457  103175 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:23:36.496967  103175 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:23:36.498347  103175 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:23:36.499694  103175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:23:36.501149  103175 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:23:36.521718  103175 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:23:36.521828  103175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:23:36.571014  103175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-21 18:23:36.563075909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:23:36.571101  103175 docker.go:295] overlay module found
	I1221 18:23:36.574214  103175 out.go:177] * Using the docker driver based on user configuration
	I1221 18:23:36.575544  103175 start.go:298] selected driver: docker
	I1221 18:23:36.575557  103175 start.go:902] validating driver "docker" against <nil>
	I1221 18:23:36.575566  103175 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:23:36.576304  103175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:23:36.624856  103175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-21 18:23:36.616818683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:23:36.625002  103175 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:23:36.625207  103175 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1221 18:23:36.627083  103175 out.go:177] * Using Docker driver with root privileges
	I1221 18:23:36.628693  103175 cni.go:84] Creating CNI manager for ""
	I1221 18:23:36.628709  103175 cni.go:136] 0 nodes found, recommending kindnet
	I1221 18:23:36.628719  103175 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 18:23:36.628738  103175 start_flags.go:323] config:
	{Name:multinode-186629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:23:36.631050  103175 out.go:177] * Starting control plane node multinode-186629 in cluster multinode-186629
	I1221 18:23:36.632957  103175 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:23:36.634275  103175 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:23:36.635473  103175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:23:36.635495  103175 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:23:36.635513  103175 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1221 18:23:36.635526  103175 cache.go:56] Caching tarball of preloaded images
	I1221 18:23:36.635634  103175 preload.go:174] Found /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 18:23:36.635647  103175 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1221 18:23:36.636052  103175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/config.json ...
	I1221 18:23:36.636084  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/config.json: {Name:mk2c06ab7ec33960bd0204b30335ff41dd5eb331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:36.651350  103175 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:23:36.651368  103175 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1221 18:23:36.651384  103175 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:23:36.651411  103175 start.go:365] acquiring machines lock for multinode-186629: {Name:mk692a42087a27ed6fcc09ef3f1d3f7ee0ec5d85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:23:36.651485  103175 start.go:369] acquired machines lock for "multinode-186629" in 60.007µs
	I1221 18:23:36.651504  103175 start.go:93] Provisioning new machine with config: &{Name:multinode-186629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 18:23:36.651565  103175 start.go:125] createHost starting for "" (driver="docker")
	I1221 18:23:36.653259  103175 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1221 18:23:36.653448  103175 start.go:159] libmachine.API.Create for "multinode-186629" (driver="docker")
	I1221 18:23:36.653475  103175 client.go:168] LocalClient.Create starting
	I1221 18:23:36.653543  103175 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem
	I1221 18:23:36.653570  103175 main.go:141] libmachine: Decoding PEM data...
	I1221 18:23:36.653585  103175 main.go:141] libmachine: Parsing certificate...
	I1221 18:23:36.653637  103175 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem
	I1221 18:23:36.653660  103175 main.go:141] libmachine: Decoding PEM data...
	I1221 18:23:36.653669  103175 main.go:141] libmachine: Parsing certificate...
	I1221 18:23:36.653934  103175 cli_runner.go:164] Run: docker network inspect multinode-186629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1221 18:23:36.668538  103175 cli_runner.go:211] docker network inspect multinode-186629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1221 18:23:36.668602  103175 network_create.go:281] running [docker network inspect multinode-186629] to gather additional debugging logs...
	I1221 18:23:36.668621  103175 cli_runner.go:164] Run: docker network inspect multinode-186629
	W1221 18:23:36.683123  103175 cli_runner.go:211] docker network inspect multinode-186629 returned with exit code 1
	I1221 18:23:36.683152  103175 network_create.go:284] error running [docker network inspect multinode-186629]: docker network inspect multinode-186629: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-186629 not found
	I1221 18:23:36.683166  103175 network_create.go:286] output of [docker network inspect multinode-186629]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-186629 not found
	
	** /stderr **
	I1221 18:23:36.683279  103175 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:23:36.698235  103175 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-91ba53cad885 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c0:8e:e6:08} reservation:<nil>}
	I1221 18:23:36.698669  103175 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021d9460}
	I1221 18:23:36.698688  103175 network_create.go:124] attempt to create docker network multinode-186629 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1221 18:23:36.698725  103175 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-186629 multinode-186629
	I1221 18:23:36.747114  103175 network_create.go:108] docker network multinode-186629 192.168.58.0/24 created
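	The network bring-up above follows an inspect-then-create pattern: a non-zero exit from `docker network inspect` is treated as the existence check, after which the network is created with an explicit subnet and gateway. A minimal standalone sketch of that flow in Go, using only os/exec; the network name and subnet below are illustrative assumptions, not minikube's actual implementation:

	    // ensure_network.go - a minimal sketch (not minikube's code) of the
	    // inspect-then-create pattern shown in the log above.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // ensureNetwork creates the named docker bridge network if
	    // `docker network inspect` reports it missing (non-zero exit).
	    func ensureNetwork(name, subnet, gateway string) error {
	    	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
	    		return nil // network already exists
	    	}
	    	out, err := exec.Command("docker", "network", "create",
	    		"--driver=bridge",
	    		"--subnet="+subnet,
	    		"--gateway="+gateway,
	    		"-o", "com.docker.network.driver.mtu=1500",
	    		name).CombinedOutput()
	    	if err != nil {
	    		return fmt.Errorf("docker network create %s: %v: %s", name, err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	// name and subnet are illustrative, not taken from the log's config
	    	if err := ensureNetwork("example-net", "192.168.58.0/24", "192.168.58.1"); err != nil {
	    		fmt.Println(err)
	    	}
	    }

	Treating inspect's exit status as the existence probe avoids parsing its output, at the cost of conflating "not found" with other inspect failures; the log above compensates by re-running inspect "to gather additional debugging logs" (network_create.go:281).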
	I1221 18:23:36.747147  103175 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-186629" container
	I1221 18:23:36.747219  103175 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:23:36.761788  103175 cli_runner.go:164] Run: docker volume create multinode-186629 --label name.minikube.sigs.k8s.io=multinode-186629 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:23:36.777439  103175 oci.go:103] Successfully created a docker volume multinode-186629
	I1221 18:23:36.777509  103175 cli_runner.go:164] Run: docker run --rm --name multinode-186629-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-186629 --entrypoint /usr/bin/test -v multinode-186629:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:23:37.247112  103175 oci.go:107] Successfully prepared a docker volume multinode-186629
	I1221 18:23:37.247150  103175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:23:37.247173  103175 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:23:37.247240  103175 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-186629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:23:42.224785  103175 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-186629:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.977489337s)
	I1221 18:23:42.224820  103175 kic.go:203] duration metric: took 4.977646 seconds to extract preloaded images to volume
	W1221 18:23:42.224939  103175 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:23:42.225028  103175 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:23:42.278075  103175 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-186629 --name multinode-186629 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-186629 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-186629 --network multinode-186629 --ip 192.168.58.2 --volume multinode-186629:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:23:42.577649  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Running}}
	I1221 18:23:42.594586  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:23:42.612161  103175 cli_runner.go:164] Run: docker exec multinode-186629 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:23:42.682763  103175 oci.go:144] the created container "multinode-186629" has a running status.
	I1221 18:23:42.682806  103175 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa...
	I1221 18:23:43.261579  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1221 18:23:43.261633  103175 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:23:43.281990  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:23:43.299503  103175 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:23:43.299522  103175 kic_runner.go:114] Args: [docker exec --privileged multinode-186629 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1221 18:23:43.361835  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:23:43.377091  103175 machine.go:88] provisioning docker machine ...
	I1221 18:23:43.377128  103175 ubuntu.go:169] provisioning hostname "multinode-186629"
	I1221 18:23:43.377183  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:43.391783  103175 main.go:141] libmachine: Using SSH client type: native
	I1221 18:23:43.392140  103175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1221 18:23:43.392157  103175 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-186629 && echo "multinode-186629" | sudo tee /etc/hostname
	I1221 18:23:43.515109  103175 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-186629
	
	I1221 18:23:43.515186  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:43.531838  103175 main.go:141] libmachine: Using SSH client type: native
	I1221 18:23:43.532145  103175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1221 18:23:43.532161  103175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-186629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-186629/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-186629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:23:43.645008  103175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1221 18:23:43.645041  103175 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-9881/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-9881/.minikube}
	I1221 18:23:43.645071  103175 ubuntu.go:177] setting up certificates
	I1221 18:23:43.645080  103175 provision.go:83] configureAuth start
	I1221 18:23:43.645128  103175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629
	I1221 18:23:43.661171  103175 provision.go:138] copyHostCerts
	I1221 18:23:43.661212  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:23:43.661258  103175 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem, removing ...
	I1221 18:23:43.661271  103175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:23:43.661329  103175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem (1078 bytes)
	I1221 18:23:43.661396  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:23:43.661416  103175 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem, removing ...
	I1221 18:23:43.661422  103175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:23:43.661447  103175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem (1123 bytes)
	I1221 18:23:43.661484  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:23:43.661499  103175 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem, removing ...
	I1221 18:23:43.661505  103175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:23:43.661524  103175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem (1679 bytes)
	I1221 18:23:43.661563  103175 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem org=jenkins.multinode-186629 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-186629]
	I1221 18:23:43.705276  103175 provision.go:172] copyRemoteCerts
	I1221 18:23:43.705356  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:23:43.705405  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:43.721225  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:23:43.804946  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1221 18:23:43.805002  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 18:23:43.824932  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1221 18:23:43.824986  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1221 18:23:43.844316  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1221 18:23:43.844373  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 18:23:43.863907  103175 provision.go:86] duration metric: configureAuth took 218.815851ms
	I1221 18:23:43.863934  103175 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:23:43.864132  103175 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:23:43.864244  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:43.880290  103175 main.go:141] libmachine: Using SSH client type: native
	I1221 18:23:43.880660  103175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32849 <nil> <nil>}
	I1221 18:23:43.880679  103175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 18:23:44.073173  103175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 18:23:44.073201  103175 machine.go:91] provisioned docker machine in 696.082564ms
	I1221 18:23:44.073212  103175 client.go:171] LocalClient.Create took 7.419729342s
	I1221 18:23:44.073246  103175 start.go:167] duration metric: libmachine.API.Create for "multinode-186629" took 7.419796657s
	I1221 18:23:44.073257  103175 start.go:300] post-start starting for "multinode-186629" (driver="docker")
	I1221 18:23:44.073268  103175 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:23:44.073317  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:23:44.073361  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:44.089079  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:23:44.173706  103175 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:23:44.176309  103175 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1221 18:23:44.176324  103175 command_runner.go:130] > NAME="Ubuntu"
	I1221 18:23:44.176329  103175 command_runner.go:130] > VERSION_ID="22.04"
	I1221 18:23:44.176342  103175 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1221 18:23:44.176347  103175 command_runner.go:130] > VERSION_CODENAME=jammy
	I1221 18:23:44.176352  103175 command_runner.go:130] > ID=ubuntu
	I1221 18:23:44.176358  103175 command_runner.go:130] > ID_LIKE=debian
	I1221 18:23:44.176370  103175 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1221 18:23:44.176381  103175 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1221 18:23:44.176395  103175 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1221 18:23:44.176405  103175 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1221 18:23:44.176411  103175 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1221 18:23:44.176461  103175 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:23:44.176498  103175 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:23:44.176517  103175 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:23:44.176534  103175 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:23:44.176544  103175 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/addons for local assets ...
	I1221 18:23:44.176602  103175 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/files for local assets ...
	I1221 18:23:44.176701  103175 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> 166642.pem in /etc/ssl/certs
	I1221 18:23:44.176717  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> /etc/ssl/certs/166642.pem
	I1221 18:23:44.176823  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:23:44.184070  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:23:44.204129  103175 start.go:303] post-start completed in 130.861629ms
	I1221 18:23:44.204418  103175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629
	I1221 18:23:44.221162  103175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/config.json ...
	I1221 18:23:44.221400  103175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:23:44.221446  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:44.236723  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:23:44.321188  103175 command_runner.go:130] > 26%!
	(MISSING)I1221 18:23:44.321503  103175 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:23:44.325300  103175 command_runner.go:130] > 218G
	I1221 18:23:44.325334  103175 start.go:128] duration metric: createHost completed in 7.673756967s
	I1221 18:23:44.325347  103175 start.go:83] releasing machines lock for "multinode-186629", held for 7.673850016s
	I1221 18:23:44.325397  103175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629
	I1221 18:23:44.341427  103175 ssh_runner.go:195] Run: cat /version.json
	I1221 18:23:44.341475  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:44.341518  103175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:23:44.341577  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:23:44.358051  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:23:44.358512  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:23:44.516791  103175 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1221 18:23:44.518878  103175 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702920864-17822", "minikube_version": "v1.32.0", "commit": "ef0b5630ad6ebb50e754541e2a9ebe20f96d24a4"}
	I1221 18:23:44.519007  103175 ssh_runner.go:195] Run: systemctl --version
	I1221 18:23:44.522823  103175 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1221 18:23:44.522858  103175 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1221 18:23:44.522913  103175 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 18:23:44.657592  103175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:23:44.661529  103175 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1221 18:23:44.661546  103175 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1221 18:23:44.661552  103175 command_runner.go:130] > Device: 37h/55d	Inode: 577309      Links: 1
	I1221 18:23:44.661559  103175 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1221 18:23:44.661565  103175 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1221 18:23:44.661570  103175 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1221 18:23:44.661575  103175 command_runner.go:130] > Change: 2023-12-21 18:04:50.542282371 +0000
	I1221 18:23:44.661580  103175 command_runner.go:130] >  Birth: 2023-12-21 18:04:50.542282371 +0000
	I1221 18:23:44.661632  103175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:23:44.677943  103175 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:23:44.678015  103175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:23:44.702620  103175 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1221 18:23:44.702677  103175 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1221 18:23:44.702690  103175 start.go:475] detecting cgroup driver to use...
	I1221 18:23:44.702726  103175 detect.go:196] detected "cgroupfs" cgroup driver on host os
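	detect.go reports the host's cgroup driver here ("cgroupfs"). One way to read that value, sketched below, is docker info's CgroupDriver template field; whether minikube's detect.go actually queries it this way is an assumption:

	    // cgroup_driver.go - sketch: read the host's cgroup driver via `docker info`.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	// {{.CgroupDriver}} is a documented `docker info` format field.
	    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	    	if err != nil {
	    		fmt.Println("docker info:", err)
	    		return
	    	}
	    	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs" or "systemd"
	    }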
	I1221 18:23:44.702774  103175 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 18:23:44.715279  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 18:23:44.724182  103175 docker.go:203] disabling cri-docker service (if available) ...
	I1221 18:23:44.724242  103175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 18:23:44.735591  103175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 18:23:44.746984  103175 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 18:23:44.822585  103175 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 18:23:44.902063  103175 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1221 18:23:44.902103  103175 docker.go:219] disabling docker service ...
	I1221 18:23:44.902152  103175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 18:23:44.918080  103175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 18:23:44.927624  103175 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 18:23:45.001649  103175 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1221 18:23:45.001712  103175 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 18:23:45.078801  103175 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1221 18:23:45.078878  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 18:23:45.088872  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:23:45.101942  103175 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1221 18:23:45.102733  103175 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1221 18:23:45.102789  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:23:45.111073  103175 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1221 18:23:45.111131  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:23:45.119530  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:23:45.127695  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:23:45.136094  103175 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
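	The sed invocations above each rewrite a single key in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager; the conmon_cgroup one deletes and re-inserts a line instead). A hedged Go sketch of the single-key rewrite form; the path and values are taken from the log, the helper itself is illustrative:

	    // rewrite_conf.go - sketch of the sed-style single-key rewrite used above,
	    // equivalent to: sed -i 's|^.*KEY = .*$|KEY = "VALUE"|' FILE
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"regexp"
	    )

	    // setTOMLKey replaces any line assigning `key` with `key = "value"`.
	    func setTOMLKey(path, key, value string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	    	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	    	return os.WriteFile(path, out, 0o644)
	    }

	    func main() {
	    	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	    	_ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	    	_ = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	    }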
	I1221 18:23:45.143712  103175 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:23:45.150655  103175 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1221 18:23:45.150717  103175 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:23:45.157830  103175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:23:45.229493  103175 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 18:23:45.327919  103175 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 18:23:45.327988  103175 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 18:23:45.331068  103175 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1221 18:23:45.331092  103175 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1221 18:23:45.331107  103175 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1221 18:23:45.331114  103175 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1221 18:23:45.331119  103175 command_runner.go:130] > Access: 2023-12-21 18:23:45.315079311 +0000
	I1221 18:23:45.331131  103175 command_runner.go:130] > Modify: 2023-12-21 18:23:45.315079311 +0000
	I1221 18:23:45.331138  103175 command_runner.go:130] > Change: 2023-12-21 18:23:45.315079311 +0000
	I1221 18:23:45.331142  103175 command_runner.go:130] >  Birth: -
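	start.go:522 above announces a bounded wait for the crio socket, satisfied here by a single successful stat. A minimal sketch of such a poll loop, assuming a fixed interval (the 500ms is an assumption; only the path and the 60s budget come from the log):

	    // wait_socket.go - sketch of polling for a unix socket, as in
	    // "Will wait 60s for socket path /var/run/crio/crio.sock" above.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    func waitForSocket(path string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
	    			return nil // socket exists
	    		}
	    		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	    	}
	    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	    }

	    func main() {
	    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }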
	I1221 18:23:45.331162  103175 start.go:543] Will wait 60s for crictl version
	I1221 18:23:45.331201  103175 ssh_runner.go:195] Run: which crictl
	I1221 18:23:45.333933  103175 command_runner.go:130] > /usr/bin/crictl
	I1221 18:23:45.333996  103175 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 18:23:45.363856  103175 command_runner.go:130] > Version:  0.1.0
	I1221 18:23:45.363876  103175 command_runner.go:130] > RuntimeName:  cri-o
	I1221 18:23:45.363881  103175 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1221 18:23:45.363886  103175 command_runner.go:130] > RuntimeApiVersion:  v1
	I1221 18:23:45.363904  103175 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1221 18:23:45.363973  103175 ssh_runner.go:195] Run: crio --version
	I1221 18:23:45.394927  103175 command_runner.go:130] > crio version 1.24.6
	I1221 18:23:45.394952  103175 command_runner.go:130] > Version:          1.24.6
	I1221 18:23:45.394967  103175 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1221 18:23:45.394975  103175 command_runner.go:130] > GitTreeState:     clean
	I1221 18:23:45.394982  103175 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1221 18:23:45.394987  103175 command_runner.go:130] > GoVersion:        go1.18.2
	I1221 18:23:45.394992  103175 command_runner.go:130] > Compiler:         gc
	I1221 18:23:45.394996  103175 command_runner.go:130] > Platform:         linux/amd64
	I1221 18:23:45.395004  103175 command_runner.go:130] > Linkmode:         dynamic
	I1221 18:23:45.395012  103175 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1221 18:23:45.395020  103175 command_runner.go:130] > SeccompEnabled:   true
	I1221 18:23:45.395024  103175 command_runner.go:130] > AppArmorEnabled:  false
	I1221 18:23:45.396590  103175 ssh_runner.go:195] Run: crio --version
	I1221 18:23:45.428958  103175 command_runner.go:130] > crio version 1.24.6
	I1221 18:23:45.428977  103175 command_runner.go:130] > Version:          1.24.6
	I1221 18:23:45.428983  103175 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1221 18:23:45.428987  103175 command_runner.go:130] > GitTreeState:     clean
	I1221 18:23:45.428993  103175 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1221 18:23:45.428997  103175 command_runner.go:130] > GoVersion:        go1.18.2
	I1221 18:23:45.429001  103175 command_runner.go:130] > Compiler:         gc
	I1221 18:23:45.429012  103175 command_runner.go:130] > Platform:         linux/amd64
	I1221 18:23:45.429017  103175 command_runner.go:130] > Linkmode:         dynamic
	I1221 18:23:45.429023  103175 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1221 18:23:45.429028  103175 command_runner.go:130] > SeccompEnabled:   true
	I1221 18:23:45.429032  103175 command_runner.go:130] > AppArmorEnabled:  false
	I1221 18:23:45.431124  103175 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1221 18:23:45.432683  103175 cli_runner.go:164] Run: docker network inspect multinode-186629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:23:45.448227  103175 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1221 18:23:45.451466  103175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:23:45.460861  103175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:23:45.460908  103175 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 18:23:45.509207  103175 command_runner.go:130] > {
	I1221 18:23:45.509238  103175 command_runner.go:130] >   "images": [
	I1221 18:23:45.509246  103175 command_runner.go:130] >     {
	I1221 18:23:45.509259  103175 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1221 18:23:45.509266  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.509275  103175 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1221 18:23:45.509281  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509289  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.509304  103175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1221 18:23:45.509315  103175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1221 18:23:45.509324  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509331  103175 command_runner.go:130] >       "size": "65258016",
	I1221 18:23:45.509336  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.509342  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.509355  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.509365  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.509374  103175 command_runner.go:130] >     },
	I1221 18:23:45.509383  103175 command_runner.go:130] >     {
	I1221 18:23:45.509397  103175 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1221 18:23:45.509408  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.509418  103175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1221 18:23:45.509424  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509429  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.509442  103175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1221 18:23:45.509459  103175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1221 18:23:45.509468  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509483  103175 command_runner.go:130] >       "size": "31470524",
	I1221 18:23:45.509493  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.509507  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.509515  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.509522  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.509528  103175 command_runner.go:130] >     },
	I1221 18:23:45.509538  103175 command_runner.go:130] >     {
	I1221 18:23:45.509552  103175 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1221 18:23:45.509562  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.509574  103175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1221 18:23:45.509583  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509591  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.509605  103175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1221 18:23:45.509621  103175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1221 18:23:45.509631  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509639  103175 command_runner.go:130] >       "size": "53621675",
	I1221 18:23:45.509649  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.509659  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.509669  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.509679  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.509694  103175 command_runner.go:130] >     },
	I1221 18:23:45.509701  103175 command_runner.go:130] >     {
	I1221 18:23:45.509708  103175 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1221 18:23:45.509718  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.509730  103175 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1221 18:23:45.509740  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509750  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.509765  103175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1221 18:23:45.509780  103175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1221 18:23:45.509796  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509804  103175 command_runner.go:130] >       "size": "295456551",
	I1221 18:23:45.509814  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.509824  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.509830  103175 command_runner.go:130] >       },
	I1221 18:23:45.509841  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.509851  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.509861  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.509870  103175 command_runner.go:130] >     },
	I1221 18:23:45.509882  103175 command_runner.go:130] >     {
	I1221 18:23:45.509893  103175 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1221 18:23:45.509899  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.509908  103175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1221 18:23:45.509918  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509929  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.509944  103175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1221 18:23:45.509960  103175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1221 18:23:45.509969  103175 command_runner.go:130] >       ],
	I1221 18:23:45.509979  103175 command_runner.go:130] >       "size": "127226832",
	I1221 18:23:45.509986  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.509993  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.510002  103175 command_runner.go:130] >       },
	I1221 18:23:45.510013  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.510021  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.510031  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.510039  103175 command_runner.go:130] >     },
	I1221 18:23:45.510048  103175 command_runner.go:130] >     {
	I1221 18:23:45.510065  103175 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1221 18:23:45.510075  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.510084  103175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1221 18:23:45.510093  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510108  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.510125  103175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1221 18:23:45.510141  103175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1221 18:23:45.510150  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510158  103175 command_runner.go:130] >       "size": "123261750",
	I1221 18:23:45.510168  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.510176  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.510184  103175 command_runner.go:130] >       },
	I1221 18:23:45.510192  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.510202  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.510209  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.510218  103175 command_runner.go:130] >     },
	I1221 18:23:45.510227  103175 command_runner.go:130] >     {
	I1221 18:23:45.510241  103175 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1221 18:23:45.510255  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.510266  103175 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1221 18:23:45.510275  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510285  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.510301  103175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1221 18:23:45.510317  103175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1221 18:23:45.510326  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510337  103175 command_runner.go:130] >       "size": "74749335",
	I1221 18:23:45.510346  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.510355  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.510363  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.510369  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.510379  103175 command_runner.go:130] >     },
	I1221 18:23:45.510388  103175 command_runner.go:130] >     {
	I1221 18:23:45.510402  103175 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1221 18:23:45.510412  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.510424  103175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1221 18:23:45.510433  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510444  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.510473  103175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1221 18:23:45.510492  103175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1221 18:23:45.510498  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510508  103175 command_runner.go:130] >       "size": "61551410",
	I1221 18:23:45.510518  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.510528  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.510537  103175 command_runner.go:130] >       },
	I1221 18:23:45.510547  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.510557  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.510566  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.510574  103175 command_runner.go:130] >     },
	I1221 18:23:45.510586  103175 command_runner.go:130] >     {
	I1221 18:23:45.510601  103175 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1221 18:23:45.510611  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.510619  103175 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1221 18:23:45.510628  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510636  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.510651  103175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1221 18:23:45.510665  103175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1221 18:23:45.510675  103175 command_runner.go:130] >       ],
	I1221 18:23:45.510682  103175 command_runner.go:130] >       "size": "750414",
	I1221 18:23:45.510693  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.510703  103175 command_runner.go:130] >         "value": "65535"
	I1221 18:23:45.510712  103175 command_runner.go:130] >       },
	I1221 18:23:45.510722  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.510731  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.510741  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.510748  103175 command_runner.go:130] >     }
	I1221 18:23:45.510752  103175 command_runner.go:130] >   ]
	I1221 18:23:45.510761  103175 command_runner.go:130] > }
	I1221 18:23:45.511753  103175 crio.go:496] all images are preloaded for cri-o runtime.
	I1221 18:23:45.511770  103175 crio.go:415] Images already preloaded, skipping extraction
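	The preload check consumes the `sudo crictl images --output json` payload above and concludes all images are present (crio.go:496). A sketch of decoding that payload in Go; the struct mirrors only the fields visible in the dump, and the required-tag list is an illustrative sample, not minikube's actual check:

	    // check_preload.go - sketch of consuming `crictl images --output json`
	    // as in the preload check above.
	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os/exec"
	    )

	    // imageList covers only the fields visible in the dump above.
	    type imageList struct {
	    	Images []struct {
	    		ID       string   `json:"id"`
	    		RepoTags []string `json:"repoTags"`
	    	} `json:"images"`
	    }

	    func main() {
	    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	    	if err != nil {
	    		fmt.Println("crictl:", err)
	    		return
	    	}
	    	var list imageList
	    	if err := json.Unmarshal(out, &list); err != nil {
	    		fmt.Println("decode:", err)
	    		return
	    	}
	    	have := map[string]bool{}
	    	for _, img := range list.Images {
	    		for _, tag := range img.RepoTags {
	    			have[tag] = true
	    		}
	    	}
	    	for _, want := range []string{ // sample tags seen in the dump above
	    		"registry.k8s.io/kube-apiserver:v1.28.4",
	    		"registry.k8s.io/pause:3.9",
	    	} {
	    		if !have[want] {
	    			fmt.Println("missing preloaded image:", want)
	    		}
	    	}
	    }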
	I1221 18:23:45.511810  103175 ssh_runner.go:195] Run: sudo crictl images --output json
	I1221 18:23:45.543143  103175 command_runner.go:130] > {
	I1221 18:23:45.543169  103175 command_runner.go:130] >   "images": [
	I1221 18:23:45.543173  103175 command_runner.go:130] >     {
	I1221 18:23:45.543181  103175 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1221 18:23:45.543186  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543191  103175 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1221 18:23:45.543195  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543199  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543208  103175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1221 18:23:45.543215  103175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1221 18:23:45.543222  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543234  103175 command_runner.go:130] >       "size": "65258016",
	I1221 18:23:45.543252  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.543256  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543261  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543267  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543271  103175 command_runner.go:130] >     },
	I1221 18:23:45.543277  103175 command_runner.go:130] >     {
	I1221 18:23:45.543283  103175 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1221 18:23:45.543293  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543299  103175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1221 18:23:45.543302  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543306  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543313  103175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1221 18:23:45.543321  103175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1221 18:23:45.543324  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543331  103175 command_runner.go:130] >       "size": "31470524",
	I1221 18:23:45.543334  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.543338  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543342  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543346  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543350  103175 command_runner.go:130] >     },
	I1221 18:23:45.543356  103175 command_runner.go:130] >     {
	I1221 18:23:45.543362  103175 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1221 18:23:45.543369  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543374  103175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1221 18:23:45.543380  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543386  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543402  103175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1221 18:23:45.543412  103175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1221 18:23:45.543419  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543423  103175 command_runner.go:130] >       "size": "53621675",
	I1221 18:23:45.543430  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.543434  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543440  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543445  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543451  103175 command_runner.go:130] >     },
	I1221 18:23:45.543455  103175 command_runner.go:130] >     {
	I1221 18:23:45.543463  103175 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1221 18:23:45.543469  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543474  103175 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1221 18:23:45.543480  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543485  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543491  103175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1221 18:23:45.543500  103175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1221 18:23:45.543515  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543522  103175 command_runner.go:130] >       "size": "295456551",
	I1221 18:23:45.543526  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.543532  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.543536  103175 command_runner.go:130] >       },
	I1221 18:23:45.543543  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543547  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543553  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543557  103175 command_runner.go:130] >     },
	I1221 18:23:45.543562  103175 command_runner.go:130] >     {
	I1221 18:23:45.543569  103175 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1221 18:23:45.543575  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543580  103175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1221 18:23:45.543586  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543591  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543601  103175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1221 18:23:45.543611  103175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1221 18:23:45.543616  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543623  103175 command_runner.go:130] >       "size": "127226832",
	I1221 18:23:45.543630  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.543634  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.543640  103175 command_runner.go:130] >       },
	I1221 18:23:45.543645  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543651  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543655  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543659  103175 command_runner.go:130] >     },
	I1221 18:23:45.543665  103175 command_runner.go:130] >     {
	I1221 18:23:45.543671  103175 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1221 18:23:45.543677  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543683  103175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1221 18:23:45.543689  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543694  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543705  103175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1221 18:23:45.543715  103175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1221 18:23:45.543721  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543726  103175 command_runner.go:130] >       "size": "123261750",
	I1221 18:23:45.543734  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.543741  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.543745  103175 command_runner.go:130] >       },
	I1221 18:23:45.543751  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543755  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543762  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543766  103175 command_runner.go:130] >     },
	I1221 18:23:45.543772  103175 command_runner.go:130] >     {
	I1221 18:23:45.543778  103175 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1221 18:23:45.543784  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543790  103175 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1221 18:23:45.543796  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543800  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543809  103175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1221 18:23:45.543819  103175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1221 18:23:45.543822  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543829  103175 command_runner.go:130] >       "size": "74749335",
	I1221 18:23:45.543833  103175 command_runner.go:130] >       "uid": null,
	I1221 18:23:45.543842  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543848  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543852  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543858  103175 command_runner.go:130] >     },
	I1221 18:23:45.543862  103175 command_runner.go:130] >     {
	I1221 18:23:45.543871  103175 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1221 18:23:45.543877  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.543882  103175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1221 18:23:45.543888  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543892  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.543917  103175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1221 18:23:45.543933  103175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1221 18:23:45.543936  103175 command_runner.go:130] >       ],
	I1221 18:23:45.543941  103175 command_runner.go:130] >       "size": "61551410",
	I1221 18:23:45.543947  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.543951  103175 command_runner.go:130] >         "value": "0"
	I1221 18:23:45.543957  103175 command_runner.go:130] >       },
	I1221 18:23:45.543962  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.543971  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.543977  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.543981  103175 command_runner.go:130] >     },
	I1221 18:23:45.543987  103175 command_runner.go:130] >     {
	I1221 18:23:45.543993  103175 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1221 18:23:45.544000  103175 command_runner.go:130] >       "repoTags": [
	I1221 18:23:45.544005  103175 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1221 18:23:45.544011  103175 command_runner.go:130] >       ],
	I1221 18:23:45.544015  103175 command_runner.go:130] >       "repoDigests": [
	I1221 18:23:45.544027  103175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1221 18:23:45.544035  103175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1221 18:23:45.544041  103175 command_runner.go:130] >       ],
	I1221 18:23:45.544048  103175 command_runner.go:130] >       "size": "750414",
	I1221 18:23:45.544055  103175 command_runner.go:130] >       "uid": {
	I1221 18:23:45.544059  103175 command_runner.go:130] >         "value": "65535"
	I1221 18:23:45.544065  103175 command_runner.go:130] >       },
	I1221 18:23:45.544069  103175 command_runner.go:130] >       "username": "",
	I1221 18:23:45.544075  103175 command_runner.go:130] >       "spec": null,
	I1221 18:23:45.544084  103175 command_runner.go:130] >       "pinned": false
	I1221 18:23:45.544090  103175 command_runner.go:130] >     }
	I1221 18:23:45.544094  103175 command_runner.go:130] >   ]
	I1221 18:23:45.544099  103175 command_runner.go:130] > }
	I1221 18:23:45.544194  103175 crio.go:496] all images are preloaded for cri-o runtime.
	I1221 18:23:45.544204  103175 cache_images.go:84] Images are preloaded, skipping loading
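	The preload check logged above boils down to diffing the image-list JSON emitted by the runtime against the images the selected Kubernetes version needs. Below is a minimal sketch of that kind of check, assuming 'crictl images -o json' output shaped like the dump above; the struct and the required-image list are illustrative stand-ins, not minikube's actual API.

// Sketch: parse the runtime's image-list JSON and verify the required images
// are present, mirroring the "all images are preloaded" decision above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// `sudo crictl images -o json` produces JSON like the dump logged above.
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the images required for v1.28.4.
	required := []string{
		"registry.k8s.io/kube-proxy:v1.28.4",
		"registry.k8s.io/kube-scheduler:v1.28.4",
		"registry.k8s.io/pause:3.9",
	}
	for _, r := range required {
		if !have[r] {
			fmt.Printf("missing %s: images must be loaded\n", r)
			return
		}
	}
	fmt.Println("all images are preloaded, skipping loading")
}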
	I1221 18:23:45.544252  103175 ssh_runner.go:195] Run: crio config
	I1221 18:23:45.578770  103175 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1221 18:23:45.578800  103175 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1221 18:23:45.578810  103175 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1221 18:23:45.578816  103175 command_runner.go:130] > #
	I1221 18:23:45.578830  103175 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1221 18:23:45.578840  103175 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1221 18:23:45.578857  103175 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1221 18:23:45.578878  103175 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1221 18:23:45.578888  103175 command_runner.go:130] > # reload'.
	I1221 18:23:45.578898  103175 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1221 18:23:45.578917  103175 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1221 18:23:45.578935  103175 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1221 18:23:45.578948  103175 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1221 18:23:45.578954  103175 command_runner.go:130] > [crio]
	I1221 18:23:45.578963  103175 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1221 18:23:45.578974  103175 command_runner.go:130] > # container images, in this directory.
	I1221 18:23:45.578991  103175 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1221 18:23:45.579004  103175 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1221 18:23:45.579016  103175 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1221 18:23:45.579026  103175 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1221 18:23:45.579036  103175 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1221 18:23:45.579043  103175 command_runner.go:130] > # storage_driver = "vfs"
	I1221 18:23:45.579053  103175 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I1221 18:23:45.579062  103175 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1221 18:23:45.579069  103175 command_runner.go:130] > # storage_option = [
	I1221 18:23:45.579075  103175 command_runner.go:130] > # ]
	I1221 18:23:45.579085  103175 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1221 18:23:45.579095  103175 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1221 18:23:45.579106  103175 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1221 18:23:45.579115  103175 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1221 18:23:45.579129  103175 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1221 18:23:45.579136  103175 command_runner.go:130] > # always happen on a node reboot
	I1221 18:23:45.579147  103175 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1221 18:23:45.579157  103175 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1221 18:23:45.579171  103175 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1221 18:23:45.579195  103175 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1221 18:23:45.579207  103175 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1221 18:23:45.579220  103175 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1221 18:23:45.579236  103175 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1221 18:23:45.579246  103175 command_runner.go:130] > # internal_wipe = true
	I1221 18:23:45.579258  103175 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1221 18:23:45.579271  103175 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1221 18:23:45.579284  103175 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1221 18:23:45.579296  103175 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1221 18:23:45.579313  103175 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1221 18:23:45.579323  103175 command_runner.go:130] > [crio.api]
	I1221 18:23:45.579339  103175 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1221 18:23:45.579350  103175 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1221 18:23:45.579361  103175 command_runner.go:130] > # IP address on which the stream server will listen.
	I1221 18:23:45.579372  103175 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1221 18:23:45.579387  103175 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1221 18:23:45.579400  103175 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1221 18:23:45.579408  103175 command_runner.go:130] > # stream_port = "0"
	I1221 18:23:45.579419  103175 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1221 18:23:45.579429  103175 command_runner.go:130] > # stream_enable_tls = false
	I1221 18:23:45.579440  103175 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1221 18:23:45.579450  103175 command_runner.go:130] > # stream_idle_timeout = ""
	I1221 18:23:45.579460  103175 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1221 18:23:45.579469  103175 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1221 18:23:45.579473  103175 command_runner.go:130] > # minutes.
	I1221 18:23:45.579478  103175 command_runner.go:130] > # stream_tls_cert = ""
	I1221 18:23:45.579491  103175 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1221 18:23:45.579505  103175 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1221 18:23:45.579512  103175 command_runner.go:130] > # stream_tls_key = ""
	I1221 18:23:45.579529  103175 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1221 18:23:45.579543  103175 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1221 18:23:45.579556  103175 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1221 18:23:45.579565  103175 command_runner.go:130] > # stream_tls_ca = ""
	I1221 18:23:45.579578  103175 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1221 18:23:45.579590  103175 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1221 18:23:45.579610  103175 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1221 18:23:45.579626  103175 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1221 18:23:45.579657  103175 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1221 18:23:45.579671  103175 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1221 18:23:45.579677  103175 command_runner.go:130] > [crio.runtime]
	I1221 18:23:45.579687  103175 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1221 18:23:45.579700  103175 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1221 18:23:45.579710  103175 command_runner.go:130] > # "nofile=1024:2048"
	I1221 18:23:45.579724  103175 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1221 18:23:45.579733  103175 command_runner.go:130] > # default_ulimits = [
	I1221 18:23:45.579737  103175 command_runner.go:130] > # ]
	I1221 18:23:45.579749  103175 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1221 18:23:45.579766  103175 command_runner.go:130] > # no_pivot = false
	I1221 18:23:45.579776  103175 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1221 18:23:45.579790  103175 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1221 18:23:45.579801  103175 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1221 18:23:45.579815  103175 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1221 18:23:45.579824  103175 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1221 18:23:45.579836  103175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1221 18:23:45.579846  103175 command_runner.go:130] > # conmon = ""
	I1221 18:23:45.579858  103175 command_runner.go:130] > # Cgroup setting for conmon
	I1221 18:23:45.579874  103175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1221 18:23:45.579884  103175 command_runner.go:130] > conmon_cgroup = "pod"
	I1221 18:23:45.579895  103175 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1221 18:23:45.579906  103175 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1221 18:23:45.579921  103175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1221 18:23:45.579929  103175 command_runner.go:130] > # conmon_env = [
	I1221 18:23:45.579934  103175 command_runner.go:130] > # ]
	I1221 18:23:45.579941  103175 command_runner.go:130] > # Additional environment variables to set for all the
	I1221 18:23:45.579948  103175 command_runner.go:130] > # containers. These are overridden if set in the
	I1221 18:23:45.579958  103175 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1221 18:23:45.579968  103175 command_runner.go:130] > # default_env = [
	I1221 18:23:45.579973  103175 command_runner.go:130] > # ]
	I1221 18:23:45.579982  103175 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1221 18:23:45.579989  103175 command_runner.go:130] > # selinux = false
	I1221 18:23:45.580000  103175 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1221 18:23:45.580010  103175 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1221 18:23:45.580020  103175 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1221 18:23:45.580027  103175 command_runner.go:130] > # seccomp_profile = ""
	I1221 18:23:45.580036  103175 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1221 18:23:45.580046  103175 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1221 18:23:45.580055  103175 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1221 18:23:45.580059  103175 command_runner.go:130] > # which might increase security.
	I1221 18:23:45.580066  103175 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1221 18:23:45.580076  103175 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1221 18:23:45.580086  103175 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1221 18:23:45.580097  103175 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1221 18:23:45.580108  103175 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1221 18:23:45.580121  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:23:45.580129  103175 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1221 18:23:45.580141  103175 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1221 18:23:45.580145  103175 command_runner.go:130] > # the cgroup blockio controller.
	I1221 18:23:45.580149  103175 command_runner.go:130] > # blockio_config_file = ""
	I1221 18:23:45.580159  103175 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1221 18:23:45.580167  103175 command_runner.go:130] > # irqbalance daemon.
	I1221 18:23:45.580176  103175 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1221 18:23:45.580187  103175 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1221 18:23:45.580196  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:23:45.580202  103175 command_runner.go:130] > # rdt_config_file = ""
	I1221 18:23:45.580211  103175 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1221 18:23:45.580218  103175 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1221 18:23:45.580228  103175 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1221 18:23:45.580232  103175 command_runner.go:130] > # separate_pull_cgroup = ""
	I1221 18:23:45.580238  103175 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1221 18:23:45.580248  103175 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1221 18:23:45.580256  103175 command_runner.go:130] > # will be added.
	I1221 18:23:45.580268  103175 command_runner.go:130] > # default_capabilities = [
	I1221 18:23:45.580274  103175 command_runner.go:130] > # 	"CHOWN",
	I1221 18:23:45.580281  103175 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1221 18:23:45.580288  103175 command_runner.go:130] > # 	"FSETID",
	I1221 18:23:45.580296  103175 command_runner.go:130] > # 	"FOWNER",
	I1221 18:23:45.580302  103175 command_runner.go:130] > # 	"SETGID",
	I1221 18:23:45.580309  103175 command_runner.go:130] > # 	"SETUID",
	I1221 18:23:45.580314  103175 command_runner.go:130] > # 	"SETPCAP",
	I1221 18:23:45.580318  103175 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1221 18:23:45.580321  103175 command_runner.go:130] > # 	"KILL",
	I1221 18:23:45.580325  103175 command_runner.go:130] > # ]
	I1221 18:23:45.580338  103175 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1221 18:23:45.580350  103175 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1221 18:23:45.580358  103175 command_runner.go:130] > # add_inheritable_capabilities = true
	I1221 18:23:45.580368  103175 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1221 18:23:45.580378  103175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1221 18:23:45.580384  103175 command_runner.go:130] > # default_sysctls = [
	I1221 18:23:45.580390  103175 command_runner.go:130] > # ]
	I1221 18:23:45.580401  103175 command_runner.go:130] > # List of devices on the host that a
	I1221 18:23:45.580407  103175 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1221 18:23:45.580413  103175 command_runner.go:130] > # allowed_devices = [
	I1221 18:23:45.580419  103175 command_runner.go:130] > # 	"/dev/fuse",
	I1221 18:23:45.580425  103175 command_runner.go:130] > # ]
	I1221 18:23:45.580433  103175 command_runner.go:130] > # List of additional devices, specified as
	I1221 18:23:45.580474  103175 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1221 18:23:45.580483  103175 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1221 18:23:45.580490  103175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1221 18:23:45.580497  103175 command_runner.go:130] > # additional_devices = [
	I1221 18:23:45.580503  103175 command_runner.go:130] > # ]
	I1221 18:23:45.580512  103175 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1221 18:23:45.580521  103175 command_runner.go:130] > # cdi_spec_dirs = [
	I1221 18:23:45.580528  103175 command_runner.go:130] > # 	"/etc/cdi",
	I1221 18:23:45.580534  103175 command_runner.go:130] > # 	"/var/run/cdi",
	I1221 18:23:45.580540  103175 command_runner.go:130] > # ]
	I1221 18:23:45.580551  103175 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1221 18:23:45.580561  103175 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1221 18:23:45.580570  103175 command_runner.go:130] > # Defaults to false.
	I1221 18:23:45.580576  103175 command_runner.go:130] > # device_ownership_from_security_context = false
	I1221 18:23:45.580582  103175 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1221 18:23:45.580592  103175 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1221 18:23:45.580604  103175 command_runner.go:130] > # hooks_dir = [
	I1221 18:23:45.580612  103175 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1221 18:23:45.580618  103175 command_runner.go:130] > # ]
	I1221 18:23:45.580628  103175 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1221 18:23:45.580638  103175 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1221 18:23:45.580647  103175 command_runner.go:130] > # its default mounts from the following two files:
	I1221 18:23:45.580652  103175 command_runner.go:130] > #
	I1221 18:23:45.580661  103175 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1221 18:23:45.580667  103175 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1221 18:23:45.580676  103175 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1221 18:23:45.580682  103175 command_runner.go:130] > #
	I1221 18:23:45.580693  103175 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1221 18:23:45.580703  103175 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1221 18:23:45.580714  103175 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1221 18:23:45.580726  103175 command_runner.go:130] > #      only add mounts it finds in this file.
	I1221 18:23:45.580732  103175 command_runner.go:130] > #
	I1221 18:23:45.580739  103175 command_runner.go:130] > # default_mounts_file = ""
	I1221 18:23:45.580745  103175 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1221 18:23:45.580752  103175 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1221 18:23:45.580758  103175 command_runner.go:130] > # pids_limit = 0
	I1221 18:23:45.580769  103175 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1221 18:23:45.580780  103175 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1221 18:23:45.580791  103175 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1221 18:23:45.580804  103175 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1221 18:23:45.580811  103175 command_runner.go:130] > # log_size_max = -1
	I1221 18:23:45.580822  103175 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1221 18:23:45.580830  103175 command_runner.go:130] > # log_to_journald = false
	I1221 18:23:45.580836  103175 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1221 18:23:45.580842  103175 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1221 18:23:45.580851  103175 command_runner.go:130] > # Path to directory for container attach sockets.
	I1221 18:23:45.580861  103175 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1221 18:23:45.580870  103175 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1221 18:23:45.580882  103175 command_runner.go:130] > # bind_mount_prefix = ""
	I1221 18:23:45.580891  103175 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1221 18:23:45.580898  103175 command_runner.go:130] > # read_only = false
	I1221 18:23:45.580908  103175 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1221 18:23:45.580916  103175 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1221 18:23:45.580920  103175 command_runner.go:130] > # live configuration reload.
	I1221 18:23:45.580923  103175 command_runner.go:130] > # log_level = "info"
	I1221 18:23:45.580933  103175 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1221 18:23:45.580941  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:23:45.580948  103175 command_runner.go:130] > # log_filter = ""
	I1221 18:23:45.580961  103175 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1221 18:23:45.580972  103175 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1221 18:23:45.580980  103175 command_runner.go:130] > # separated by comma.
	I1221 18:23:45.580987  103175 command_runner.go:130] > # uid_mappings = ""
	I1221 18:23:45.580996  103175 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1221 18:23:45.581004  103175 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1221 18:23:45.581008  103175 command_runner.go:130] > # separated by comma.
	I1221 18:23:45.581013  103175 command_runner.go:130] > # gid_mappings = ""
	I1221 18:23:45.581026  103175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1221 18:23:45.581037  103175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1221 18:23:45.581048  103175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1221 18:23:45.581055  103175 command_runner.go:130] > # minimum_mappable_uid = -1
	I1221 18:23:45.581066  103175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1221 18:23:45.581076  103175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1221 18:23:45.581099  103175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1221 18:23:45.581111  103175 command_runner.go:130] > # minimum_mappable_gid = -1
	I1221 18:23:45.581122  103175 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1221 18:23:45.581132  103175 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1221 18:23:45.581142  103175 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1221 18:23:45.581149  103175 command_runner.go:130] > # ctr_stop_timeout = 30
	I1221 18:23:45.581159  103175 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1221 18:23:45.581171  103175 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1221 18:23:45.581178  103175 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1221 18:23:45.581184  103175 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1221 18:23:45.581190  103175 command_runner.go:130] > # drop_infra_ctr = true
	I1221 18:23:45.581201  103175 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1221 18:23:45.581213  103175 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1221 18:23:45.581225  103175 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1221 18:23:45.581245  103175 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1221 18:23:45.581256  103175 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1221 18:23:45.581265  103175 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1221 18:23:45.581272  103175 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1221 18:23:45.581284  103175 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1221 18:23:45.581290  103175 command_runner.go:130] > # pinns_path = ""
	I1221 18:23:45.581300  103175 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1221 18:23:45.581307  103175 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1221 18:23:45.581315  103175 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1221 18:23:45.581322  103175 command_runner.go:130] > # default_runtime = "runc"
	I1221 18:23:45.581332  103175 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1221 18:23:45.581345  103175 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1221 18:23:45.581359  103175 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1221 18:23:45.581368  103175 command_runner.go:130] > # creation as a file is not desired either.
	I1221 18:23:45.581382  103175 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1221 18:23:45.581389  103175 command_runner.go:130] > # the hostname is being managed dynamically.
	I1221 18:23:45.581397  103175 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1221 18:23:45.581402  103175 command_runner.go:130] > # ]
	I1221 18:23:45.581413  103175 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1221 18:23:45.581424  103175 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1221 18:23:45.581436  103175 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1221 18:23:45.581446  103175 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1221 18:23:45.581452  103175 command_runner.go:130] > #
	I1221 18:23:45.581460  103175 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1221 18:23:45.581469  103175 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1221 18:23:45.581475  103175 command_runner.go:130] > #  runtime_type = "oci"
	I1221 18:23:45.581480  103175 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1221 18:23:45.581484  103175 command_runner.go:130] > #  privileged_without_host_devices = false
	I1221 18:23:45.581490  103175 command_runner.go:130] > #  allowed_annotations = []
	I1221 18:23:45.581496  103175 command_runner.go:130] > # Where:
	I1221 18:23:45.581506  103175 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1221 18:23:45.581519  103175 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1221 18:23:45.581530  103175 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1221 18:23:45.581540  103175 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1221 18:23:45.581549  103175 command_runner.go:130] > #   in $PATH.
	I1221 18:23:45.581559  103175 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1221 18:23:45.581565  103175 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1221 18:23:45.581574  103175 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1221 18:23:45.581580  103175 command_runner.go:130] > #   state.
	I1221 18:23:45.581591  103175 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1221 18:23:45.581605  103175 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1221 18:23:45.581616  103175 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1221 18:23:45.581625  103175 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1221 18:23:45.581636  103175 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1221 18:23:45.581646  103175 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1221 18:23:45.581651  103175 command_runner.go:130] > #   The currently recognized values are:
	I1221 18:23:45.581661  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1221 18:23:45.581673  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1221 18:23:45.581683  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1221 18:23:45.581693  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1221 18:23:45.581706  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1221 18:23:45.581717  103175 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1221 18:23:45.581733  103175 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1221 18:23:45.581740  103175 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1221 18:23:45.581746  103175 command_runner.go:130] > #   should be moved to the container's cgroup
	I1221 18:23:45.581753  103175 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1221 18:23:45.581762  103175 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1221 18:23:45.581769  103175 command_runner.go:130] > runtime_type = "oci"
	I1221 18:23:45.581776  103175 command_runner.go:130] > runtime_root = "/run/runc"
	I1221 18:23:45.581783  103175 command_runner.go:130] > runtime_config_path = ""
	I1221 18:23:45.581789  103175 command_runner.go:130] > monitor_path = ""
	I1221 18:23:45.581796  103175 command_runner.go:130] > monitor_cgroup = ""
	I1221 18:23:45.581803  103175 command_runner.go:130] > monitor_exec_cgroup = ""
	I1221 18:23:45.581862  103175 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1221 18:23:45.581870  103175 command_runner.go:130] > # running containers
	I1221 18:23:45.581877  103175 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1221 18:23:45.581888  103175 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1221 18:23:45.581901  103175 command_runner.go:130] > # VMs. Kata provides additional isolation from the host, minimizing the host attack
	I1221 18:23:45.581909  103175 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1221 18:23:45.581915  103175 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1221 18:23:45.581923  103175 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1221 18:23:45.581930  103175 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1221 18:23:45.581939  103175 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1221 18:23:45.581947  103175 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1221 18:23:45.581954  103175 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1221 18:23:45.581965  103175 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1221 18:23:45.581974  103175 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1221 18:23:45.581985  103175 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1221 18:23:45.581996  103175 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1221 18:23:45.582004  103175 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1221 18:23:45.582012  103175 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1221 18:23:45.582028  103175 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1221 18:23:45.582044  103175 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1221 18:23:45.582053  103175 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1221 18:23:45.582065  103175 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1221 18:23:45.582071  103175 command_runner.go:130] > # Example:
	I1221 18:23:45.582079  103175 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1221 18:23:45.582084  103175 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1221 18:23:45.582094  103175 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1221 18:23:45.582103  103175 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1221 18:23:45.582109  103175 command_runner.go:130] > # cpuset = 0
	I1221 18:23:45.582116  103175 command_runner.go:130] > # cpushares = "0-1"
	I1221 18:23:45.582122  103175 command_runner.go:130] > # Where:
	I1221 18:23:45.582130  103175 command_runner.go:130] > # The workload name is workload-type.
	I1221 18:23:45.582141  103175 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1221 18:23:45.582150  103175 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1221 18:23:45.582160  103175 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1221 18:23:45.582171  103175 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1221 18:23:45.582176  103175 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1221 18:23:45.582180  103175 command_runner.go:130] > # 
	I1221 18:23:45.582196  103175 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1221 18:23:45.582202  103175 command_runner.go:130] > #
	I1221 18:23:45.582215  103175 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1221 18:23:45.582225  103175 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1221 18:23:45.582236  103175 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1221 18:23:45.582246  103175 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1221 18:23:45.582257  103175 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1221 18:23:45.582260  103175 command_runner.go:130] > [crio.image]
	I1221 18:23:45.582268  103175 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1221 18:23:45.582275  103175 command_runner.go:130] > # default_transport = "docker://"
	I1221 18:23:45.582285  103175 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1221 18:23:45.582296  103175 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1221 18:23:45.582303  103175 command_runner.go:130] > # global_auth_file = ""
	I1221 18:23:45.582312  103175 command_runner.go:130] > # The image used to instantiate infra containers.
	I1221 18:23:45.582322  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:23:45.582331  103175 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1221 18:23:45.582341  103175 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1221 18:23:45.582348  103175 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1221 18:23:45.582355  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:23:45.582361  103175 command_runner.go:130] > # pause_image_auth_file = ""
	I1221 18:23:45.582372  103175 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1221 18:23:45.582382  103175 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1221 18:23:45.582392  103175 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1221 18:23:45.582403  103175 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1221 18:23:45.582413  103175 command_runner.go:130] > # pause_command = "/pause"
	I1221 18:23:45.582423  103175 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1221 18:23:45.582431  103175 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1221 18:23:45.582436  103175 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1221 18:23:45.582446  103175 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1221 18:23:45.582456  103175 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1221 18:23:45.582463  103175 command_runner.go:130] > # signature_policy = ""
	I1221 18:23:45.582477  103175 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1221 18:23:45.582487  103175 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1221 18:23:45.582494  103175 command_runner.go:130] > # changing them here.
	I1221 18:23:45.582501  103175 command_runner.go:130] > # insecure_registries = [
	I1221 18:23:45.582507  103175 command_runner.go:130] > # ]
	I1221 18:23:45.582515  103175 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1221 18:23:45.582520  103175 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1221 18:23:45.582529  103175 command_runner.go:130] > # image_volumes = "mkdir"
	I1221 18:23:45.582538  103175 command_runner.go:130] > # Temporary directory to use for storing big files
	I1221 18:23:45.582546  103175 command_runner.go:130] > # big_files_temporary_dir = ""
	I1221 18:23:45.582556  103175 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1221 18:23:45.582565  103175 command_runner.go:130] > # CNI plugins.
	I1221 18:23:45.582571  103175 command_runner.go:130] > [crio.network]
	I1221 18:23:45.582582  103175 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1221 18:23:45.582591  103175 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1221 18:23:45.582601  103175 command_runner.go:130] > # cni_default_network = ""
	I1221 18:23:45.582607  103175 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1221 18:23:45.582613  103175 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1221 18:23:45.582623  103175 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1221 18:23:45.582630  103175 command_runner.go:130] > # plugin_dirs = [
	I1221 18:23:45.582636  103175 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1221 18:23:45.582642  103175 command_runner.go:130] > # ]
	I1221 18:23:45.582652  103175 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1221 18:23:45.582658  103175 command_runner.go:130] > [crio.metrics]
	I1221 18:23:45.582666  103175 command_runner.go:130] > # Globally enable or disable metrics support.
	I1221 18:23:45.582675  103175 command_runner.go:130] > # enable_metrics = false
	I1221 18:23:45.582683  103175 command_runner.go:130] > # Specify enabled metrics collectors.
	I1221 18:23:45.582688  103175 command_runner.go:130] > # Per default all metrics are enabled.
	I1221 18:23:45.582693  103175 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1221 18:23:45.582707  103175 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1221 18:23:45.582718  103175 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1221 18:23:45.582725  103175 command_runner.go:130] > # metrics_collectors = [
	I1221 18:23:45.582732  103175 command_runner.go:130] > # 	"operations",
	I1221 18:23:45.582740  103175 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1221 18:23:45.582748  103175 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1221 18:23:45.582755  103175 command_runner.go:130] > # 	"operations_errors",
	I1221 18:23:45.582762  103175 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1221 18:23:45.582768  103175 command_runner.go:130] > # 	"image_pulls_by_name",
	I1221 18:23:45.582773  103175 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1221 18:23:45.582777  103175 command_runner.go:130] > # 	"image_pulls_failures",
	I1221 18:23:45.582783  103175 command_runner.go:130] > # 	"image_pulls_successes",
	I1221 18:23:45.582790  103175 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1221 18:23:45.582797  103175 command_runner.go:130] > # 	"image_layer_reuse",
	I1221 18:23:45.582804  103175 command_runner.go:130] > # 	"containers_oom_total",
	I1221 18:23:45.582811  103175 command_runner.go:130] > # 	"containers_oom",
	I1221 18:23:45.582818  103175 command_runner.go:130] > # 	"processes_defunct",
	I1221 18:23:45.582825  103175 command_runner.go:130] > # 	"operations_total",
	I1221 18:23:45.582835  103175 command_runner.go:130] > # 	"operations_latency_seconds",
	I1221 18:23:45.582843  103175 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1221 18:23:45.582850  103175 command_runner.go:130] > # 	"operations_errors_total",
	I1221 18:23:45.582857  103175 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1221 18:23:45.582863  103175 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1221 18:23:45.582872  103175 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1221 18:23:45.582883  103175 command_runner.go:130] > # 	"image_pulls_success_total",
	I1221 18:23:45.582891  103175 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1221 18:23:45.582898  103175 command_runner.go:130] > # 	"containers_oom_count_total",
	I1221 18:23:45.582904  103175 command_runner.go:130] > # ]
	I1221 18:23:45.582912  103175 command_runner.go:130] > # The port on which the metrics server will listen.
	I1221 18:23:45.582919  103175 command_runner.go:130] > # metrics_port = 9090
	I1221 18:23:45.582927  103175 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1221 18:23:45.582934  103175 command_runner.go:130] > # metrics_socket = ""
	I1221 18:23:45.582942  103175 command_runner.go:130] > # The certificate for the secure metrics server.
	I1221 18:23:45.582948  103175 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1221 18:23:45.582956  103175 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1221 18:23:45.582964  103175 command_runner.go:130] > # certificate on any modification event.
	I1221 18:23:45.582974  103175 command_runner.go:130] > # metrics_cert = ""
	I1221 18:23:45.582984  103175 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1221 18:23:45.582992  103175 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1221 18:23:45.582999  103175 command_runner.go:130] > # metrics_key = ""
	I1221 18:23:45.583010  103175 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1221 18:23:45.583016  103175 command_runner.go:130] > [crio.tracing]
	I1221 18:23:45.583025  103175 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1221 18:23:45.583030  103175 command_runner.go:130] > # enable_tracing = false
	I1221 18:23:45.583036  103175 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1221 18:23:45.583040  103175 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1221 18:23:45.583046  103175 command_runner.go:130] > # Number of samples to collect per million spans.
	I1221 18:23:45.583051  103175 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1221 18:23:45.583056  103175 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1221 18:23:45.583061  103175 command_runner.go:130] > [crio.stats]
	I1221 18:23:45.583071  103175 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1221 18:23:45.583080  103175 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1221 18:23:45.583087  103175 command_runner.go:130] > # stats_collection_period = 0
	I1221 18:23:45.583124  103175 command_runner.go:130] ! time="2023-12-21 18:23:45.576860974Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1221 18:23:45.583141  103175 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1221 18:23:45.583209  103175 cni.go:84] Creating CNI manager for ""
	I1221 18:23:45.583219  103175 cni.go:136] 1 nodes found, recommending kindnet
	I1221 18:23:45.583239  103175 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:23:45.583262  103175 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-186629 NodeName:multinode-186629 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 18:23:45.583405  103175 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-186629"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
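	The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:176. A simplified sketch of that templating step, assuming a cut-down option struct and template whose field names are stand-ins for minikube's real ones:

// Sketch: render a kubeadm InitConfiguration fragment from node settings,
// the way the options struct above becomes the config that follows it.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.58.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "multinode-186629",
		NodeIP:           "192.168.58.2",
	}
	// Render to stdout; the real flow writes the result to
	// /var/tmp/minikube/kubeadm.yaml.new before invoking kubeadm.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}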
	
	I1221 18:23:45.583469  103175 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-186629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
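	The kubelet unit above is assembled the same way from the node's settings: the ExecStart line is the versioned kubelet binary plus a flag list derived from the cluster config. A rough sketch, using a hypothetical helper name:

// Sketch: build the kubelet ExecStart command line shown in the unit above.
// Flag values come from the logged unit; the helper itself is illustrative.
package main

import (
	"fmt"
	"strings"
)

func kubeletExecStart(version, nodeName, nodeIP string) string {
	bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", version)
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return bin + " " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.28.4", "multinode-186629", "192.168.58.2"))
}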
	I1221 18:23:45.583508  103175 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1221 18:23:45.590419  103175 command_runner.go:130] > kubeadm
	I1221 18:23:45.590434  103175 command_runner.go:130] > kubectl
	I1221 18:23:45.590438  103175 command_runner.go:130] > kubelet
	I1221 18:23:45.591099  103175 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:23:45.591158  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1221 18:23:45.598427  103175 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1221 18:23:45.613557  103175 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 18:23:45.628990  103175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
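At this point the rendered config sits on the node as /var/tmp/minikube/kubeadm.yaml.new, a single file bundling four API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal sketch, such a file can be sanity-checked before init, assuming kubeadm v1.26+ where the validate subcommand exists:

    # hedged example; path taken from the log line above
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new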
	I1221 18:23:45.644317  103175 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:23:45.647337  103175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
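The one-liner above is an idempotent hosts-file update: drop any stale control-plane.minikube.internal entry, append the current mapping, and copy the result back. The same steps unrolled, with comments (paths as in the log):

    # remove any existing entry for the control-plane alias (tab-separated match)
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
    # append the fresh IP -> name mapping
    printf '192.168.58.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
    # overwrite in place; cp keeps the original inode
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$

Using cp rather than mv keeps the original inode, which matters when /etc/hosts is a bind mount (as it is inside a Docker container) and cannot be replaced by rename.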
	I1221 18:23:45.656387  103175 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629 for IP: 192.168.58.2
	I1221 18:23:45.656412  103175 certs.go:190] acquiring lock for shared ca certs: {Name:mk1a19dbb52a881fd398c5196f3505713dce7712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.656548  103175 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key
	I1221 18:23:45.656586  103175 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key
	I1221 18:23:45.656628  103175 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key
	I1221 18:23:45.656644  103175 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt with IP's: []
	I1221 18:23:45.762249  103175 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt ...
	I1221 18:23:45.762277  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt: {Name:mkbb3c69f392c039c2c5c4bad142d299e17ca91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.762431  103175 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key ...
	I1221 18:23:45.762444  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key: {Name:mk38e6ebdd0c8f5efc3ceae7d2fa74eece4cbbdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.762515  103175 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key.cee25041
	I1221 18:23:45.762529  103175 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1221 18:23:45.837334  103175 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt.cee25041 ...
	I1221 18:23:45.837364  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt.cee25041: {Name:mk7379bcd8d11672dc9d32951c867db5b2f1e8ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.837507  103175 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key.cee25041 ...
	I1221 18:23:45.837519  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key.cee25041: {Name:mk5a61367222c8a69e0d3aad9da04fd003a58cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.837589  103175 certs.go:337] copying /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt
	I1221 18:23:45.837670  103175 certs.go:341] copying /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key
	I1221 18:23:45.837730  103175 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.key
	I1221 18:23:45.837743  103175 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.crt with IP's: []
	I1221 18:23:45.911738  103175 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.crt ...
	I1221 18:23:45.911767  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.crt: {Name:mk73db66e97f20a4b0f5367d24210e5ac70db42c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.911910  103175 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.key ...
	I1221 18:23:45.911927  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.key: {Name:mkf792d79b76a6332ad4829de408156371055c93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:23:45.911992  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1221 18:23:45.912008  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1221 18:23:45.912017  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1221 18:23:45.912029  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1221 18:23:45.912040  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1221 18:23:45.912055  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1221 18:23:45.912070  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1221 18:23:45.912082  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1221 18:23:45.912131  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem (1338 bytes)
	W1221 18:23:45.912161  103175 certs.go:433] ignoring /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664_empty.pem, impossibly tiny 0 bytes
	I1221 18:23:45.912171  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem (1679 bytes)
	I1221 18:23:45.912191  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem (1078 bytes)
	I1221 18:23:45.912216  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:23:45.912241  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem (1679 bytes)
	I1221 18:23:45.912284  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:23:45.912308  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:23:45.912321  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem -> /usr/share/ca-certificates/16664.pem
	I1221 18:23:45.912335  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> /usr/share/ca-certificates/166642.pem
	I1221 18:23:45.912930  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1221 18:23:45.933797  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1221 18:23:45.953512  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1221 18:23:45.972440  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1221 18:23:45.992151  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:23:46.011789  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 18:23:46.031746  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:23:46.051978  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:23:46.072190  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:23:46.092421  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem --> /usr/share/ca-certificates/16664.pem (1338 bytes)
	I1221 18:23:46.112392  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /usr/share/ca-certificates/166642.pem (1708 bytes)
	I1221 18:23:46.132344  103175 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1221 18:23:46.147264  103175 ssh_runner.go:195] Run: openssl version
	I1221 18:23:46.151843  103175 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1221 18:23:46.152076  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:23:46.159872  103175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:23:46.162799  103175 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 21 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:23:46.162829  103175 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:23:46.162865  103175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:23:46.168431  103175 command_runner.go:130] > b5213941
	I1221 18:23:46.168648  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1221 18:23:46.176287  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16664.pem && ln -fs /usr/share/ca-certificates/16664.pem /etc/ssl/certs/16664.pem"
	I1221 18:23:46.183832  103175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16664.pem
	I1221 18:23:46.186551  103175 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 21 18:11 /usr/share/ca-certificates/16664.pem
	I1221 18:23:46.186659  103175 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 21 18:11 /usr/share/ca-certificates/16664.pem
	I1221 18:23:46.186688  103175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16664.pem
	I1221 18:23:46.192316  103175 command_runner.go:130] > 51391683
	I1221 18:23:46.192517  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16664.pem /etc/ssl/certs/51391683.0"
	I1221 18:23:46.201956  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166642.pem && ln -fs /usr/share/ca-certificates/166642.pem /etc/ssl/certs/166642.pem"
	I1221 18:23:46.210051  103175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166642.pem
	I1221 18:23:46.212811  103175 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 21 18:11 /usr/share/ca-certificates/166642.pem
	I1221 18:23:46.212838  103175 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 21 18:11 /usr/share/ca-certificates/166642.pem
	I1221 18:23:46.212872  103175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166642.pem
	I1221 18:23:46.218513  103175 command_runner.go:130] > 3ec20f2e
	I1221 18:23:46.218755  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166642.pem /etc/ssl/certs/3ec20f2e.0"
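Each of the three hash-then-symlink steps above follows OpenSSL's c_rehash convention: at verification time OpenSSL looks a certificate up in /etc/ssl/certs via a symlink named <subject-hash>.0. A minimal sketch of the same operation for one certificate:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    # subject-name hash OpenSSL uses for directory lookups (b5213941 in the log above)
    hash=$(openssl x509 -hash -noout -in "$cert")
    # the .0 suffix disambiguates colliding hashes (.1, .2, ... would follow)
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"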
	I1221 18:23:46.226508  103175 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:23:46.229206  103175 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:23:46.229279  103175 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:23:46.229323  103175 kubeadm.go:404] StartCluster: {Name:multinode-186629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:23:46.229393  103175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1221 18:23:46.229426  103175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1221 18:23:46.264643  103175 cri.go:89] found id: ""
	I1221 18:23:46.264712  103175 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1221 18:23:46.272466  103175 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1221 18:23:46.272488  103175 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1221 18:23:46.272495  103175 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1221 18:23:46.272554  103175 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1221 18:23:46.280161  103175 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1221 18:23:46.280216  103175 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1221 18:23:46.286854  103175 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1221 18:23:46.286881  103175 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1221 18:23:46.286893  103175 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1221 18:23:46.286905  103175 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:23:46.287505  103175 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1221 18:23:46.287548  103175 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1221 18:23:46.329377  103175 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1221 18:23:46.329412  103175 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1221 18:23:46.329451  103175 kubeadm.go:322] [preflight] Running pre-flight checks
	I1221 18:23:46.329455  103175 command_runner.go:130] > [preflight] Running pre-flight checks
	I1221 18:23:46.362564  103175 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:23:46.362595  103175 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:23:46.362674  103175 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1221 18:23:46.362684  103175 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1221 18:23:46.362734  103175 kubeadm.go:322] OS: Linux
	I1221 18:23:46.362745  103175 command_runner.go:130] > OS: Linux
	I1221 18:23:46.362806  103175 kubeadm.go:322] CGROUPS_CPU: enabled
	I1221 18:23:46.362822  103175 command_runner.go:130] > CGROUPS_CPU: enabled
	I1221 18:23:46.362898  103175 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1221 18:23:46.362909  103175 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1221 18:23:46.362973  103175 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1221 18:23:46.362982  103175 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1221 18:23:46.363038  103175 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1221 18:23:46.363058  103175 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1221 18:23:46.363139  103175 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1221 18:23:46.363153  103175 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1221 18:23:46.363226  103175 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1221 18:23:46.363237  103175 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1221 18:23:46.363310  103175 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1221 18:23:46.363323  103175 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1221 18:23:46.363368  103175 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1221 18:23:46.363381  103175 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1221 18:23:46.363442  103175 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1221 18:23:46.363454  103175 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1221 18:23:46.421601  103175 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:23:46.421629  103175 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1221 18:23:46.421740  103175 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:23:46.421757  103175 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1221 18:23:46.421862  103175 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1221 18:23:46.421871  103175 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1221 18:23:46.605420  103175 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:23:46.607222  103175 out.go:204]   - Generating certificates and keys ...
	I1221 18:23:46.605467  103175 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1221 18:23:46.607330  103175 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1221 18:23:46.607345  103175 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1221 18:23:46.607460  103175 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1221 18:23:46.607482  103175 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1221 18:23:46.683709  103175 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:23:46.683757  103175 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1221 18:23:46.747096  103175 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:23:46.747139  103175 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1221 18:23:46.941501  103175 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1221 18:23:46.941534  103175 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1221 18:23:47.304251  103175 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1221 18:23:47.304278  103175 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1221 18:23:47.476803  103175 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1221 18:23:47.476829  103175 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1221 18:23:47.476950  103175 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-186629] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1221 18:23:47.476959  103175 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-186629] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1221 18:23:47.640214  103175 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1221 18:23:47.640245  103175 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1221 18:23:47.640424  103175 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-186629] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1221 18:23:47.640447  103175 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-186629] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1221 18:23:47.731565  103175 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:23:47.731597  103175 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1221 18:23:47.850907  103175 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:23:47.850942  103175 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1221 18:23:47.983773  103175 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1221 18:23:47.983815  103175 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1221 18:23:47.983901  103175 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:23:47.983916  103175 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1221 18:23:48.130888  103175 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:23:48.130905  103175 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1221 18:23:48.257517  103175 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:23:48.257550  103175 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1221 18:23:48.384647  103175 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:23:48.384675  103175 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1221 18:23:48.602045  103175 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:23:48.602074  103175 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1221 18:23:48.602478  103175 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:23:48.602501  103175 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1221 18:23:48.604756  103175 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:23:48.606984  103175 out.go:204]   - Booting up control plane ...
	I1221 18:23:48.604846  103175 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1221 18:23:48.607113  103175 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:23:48.607137  103175 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1221 18:23:48.607233  103175 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:23:48.607248  103175 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1221 18:23:48.607327  103175 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:23:48.607339  103175 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1221 18:23:48.615741  103175 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:23:48.615764  103175 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:23:48.616405  103175 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:23:48.616418  103175 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:23:48.616474  103175 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1221 18:23:48.616488  103175 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1221 18:23:48.690475  103175 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:23:48.690513  103175 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1221 18:23:53.692742  103175 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002335 seconds
	I1221 18:23:53.692784  103175 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002335 seconds
	I1221 18:23:53.692959  103175 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:23:53.692979  103175 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1221 18:23:53.705512  103175 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:23:53.705541  103175 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1221 18:23:54.226677  103175 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:23:54.226718  103175 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1221 18:23:54.226920  103175 kubeadm.go:322] [mark-control-plane] Marking the node multinode-186629 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 18:23:54.226944  103175 command_runner.go:130] > [mark-control-plane] Marking the node multinode-186629 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1221 18:23:54.735599  103175 kubeadm.go:322] [bootstrap-token] Using token: rywi2e.ti69wk7g5y8daet4
	I1221 18:23:54.737079  103175 out.go:204]   - Configuring RBAC rules ...
	I1221 18:23:54.735657  103175 command_runner.go:130] > [bootstrap-token] Using token: rywi2e.ti69wk7g5y8daet4
	I1221 18:23:54.737215  103175 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:23:54.737251  103175 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1221 18:23:54.740399  103175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:23:54.740420  103175 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1221 18:23:54.747040  103175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:23:54.747060  103175 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1221 18:23:54.749594  103175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:23:54.749624  103175 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1221 18:23:54.752752  103175 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:23:54.752770  103175 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1221 18:23:54.755136  103175 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:23:54.755150  103175 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1221 18:23:54.763835  103175 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:23:54.763857  103175 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1221 18:23:54.948038  103175 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1221 18:23:54.948062  103175 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1221 18:23:55.144986  103175 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1221 18:23:55.145013  103175 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1221 18:23:55.145780  103175 kubeadm.go:322] 
	I1221 18:23:55.145905  103175 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1221 18:23:55.145927  103175 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1221 18:23:55.145953  103175 kubeadm.go:322] 
	I1221 18:23:55.146068  103175 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1221 18:23:55.146078  103175 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1221 18:23:55.146091  103175 kubeadm.go:322] 
	I1221 18:23:55.146114  103175 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1221 18:23:55.146119  103175 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1221 18:23:55.146196  103175 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:23:55.146207  103175 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1221 18:23:55.146271  103175 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:23:55.146282  103175 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1221 18:23:55.146292  103175 kubeadm.go:322] 
	I1221 18:23:55.146369  103175 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1221 18:23:55.146379  103175 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1221 18:23:55.146384  103175 kubeadm.go:322] 
	I1221 18:23:55.146448  103175 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 18:23:55.146457  103175 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1221 18:23:55.146463  103175 kubeadm.go:322] 
	I1221 18:23:55.146541  103175 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1221 18:23:55.146554  103175 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1221 18:23:55.146656  103175 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:23:55.146667  103175 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1221 18:23:55.146757  103175 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:23:55.146778  103175 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1221 18:23:55.146785  103175 kubeadm.go:322] 
	I1221 18:23:55.146917  103175 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:23:55.146928  103175 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1221 18:23:55.146991  103175 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1221 18:23:55.146998  103175 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1221 18:23:55.147001  103175 kubeadm.go:322] 
	I1221 18:23:55.147077  103175 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token rywi2e.ti69wk7g5y8daet4 \
	I1221 18:23:55.147087  103175 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rywi2e.ti69wk7g5y8daet4 \
	I1221 18:23:55.147211  103175 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b \
	I1221 18:23:55.147221  103175 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b \
	I1221 18:23:55.147250  103175 command_runner.go:130] > 	--control-plane 
	I1221 18:23:55.147258  103175 kubeadm.go:322] 	--control-plane 
	I1221 18:23:55.147264  103175 kubeadm.go:322] 
	I1221 18:23:55.147378  103175 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:23:55.147386  103175 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1221 18:23:55.147394  103175 kubeadm.go:322] 
	I1221 18:23:55.147498  103175 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rywi2e.ti69wk7g5y8daet4 \
	I1221 18:23:55.147508  103175 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rywi2e.ti69wk7g5y8daet4 \
	I1221 18:23:55.147620  103175 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b 
	I1221 18:23:55.147631  103175 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b 
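The --discovery-token-ca-cert-hash pin printed twice above is not a digest of the ca.crt file itself; it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to authenticate the control plane before trusting it. It can be recomputed from the CA on disk:

    # reproduce kubeadm's sha256:<hex> pin (cert path taken from the log)
    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256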
	I1221 18:23:55.149307  103175 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1221 18:23:55.149323  103175 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1221 18:23:55.149451  103175 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 18:23:55.149465  103175 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 18:23:55.149489  103175 cni.go:84] Creating CNI manager for ""
	I1221 18:23:55.149497  103175 cni.go:136] 1 nodes found, recommending kindnet
	I1221 18:23:55.151291  103175 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1221 18:23:55.152585  103175 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 18:23:55.155869  103175 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1221 18:23:55.155885  103175 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I1221 18:23:55.155891  103175 command_runner.go:130] > Device: 37h/55d	Inode: 582225      Links: 1
	I1221 18:23:55.155897  103175 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1221 18:23:55.155904  103175 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I1221 18:23:55.155909  103175 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I1221 18:23:55.155914  103175 command_runner.go:130] > Change: 2023-12-21 18:04:50.938311966 +0000
	I1221 18:23:55.155918  103175 command_runner.go:130] >  Birth: 2023-12-21 18:04:50.914310172 +0000
	I1221 18:23:55.155953  103175 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1221 18:23:55.155965  103175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1221 18:23:55.171741  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 18:23:55.768920  103175 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1221 18:23:55.773521  103175 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1221 18:23:55.779378  103175 command_runner.go:130] > serviceaccount/kindnet created
	I1221 18:23:55.788477  103175 command_runner.go:130] > daemonset.apps/kindnet created
	I1221 18:23:55.792298  103175 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1221 18:23:55.792370  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=multinode-186629 minikube.k8s.io/updated_at=2023_12_21T18_23_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:55.792381  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:55.798845  103175 command_runner.go:130] > -16
	I1221 18:23:55.798879  103175 ops.go:34] apiserver oom_adj: -16
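The -16 read back above is the legacy /proc/<pid>/oom_adj view (range -17..15, where -17 disables OOM killing); the kernel derives it from the modern oom_score_adj value (range -1000..1000) that Kubernetes sets for critical pods. Both views can be inspected side by side:

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj        # legacy scale, -17..15
    cat /proc/$pid/oom_score_adj  # current scale, -1000..1000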
	I1221 18:23:55.854687  103175 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1221 18:23:55.885812  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:55.896302  103175 command_runner.go:130] > node/multinode-186629 labeled
	I1221 18:23:55.948411  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:56.385971  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:56.445503  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:56.885868  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:56.943550  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:57.386435  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:57.445416  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:57.886430  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:57.944450  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:58.386424  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:58.448906  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:58.886416  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:58.947985  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:59.386434  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:59.444472  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:23:59.886213  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:23:59.946040  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:00.385948  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:00.445991  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:00.886422  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:00.944106  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:01.385899  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:01.444008  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:01.886219  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:01.949441  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:02.385895  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:02.445111  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:02.886455  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:02.944336  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:03.386604  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:03.444960  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:03.886195  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:03.944936  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:04.386170  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:04.444297  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:04.886481  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:04.950015  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:05.386419  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:05.451526  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:05.886343  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:05.946224  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:06.386427  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:06.448128  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:06.886417  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:06.947396  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:07.385966  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:07.445926  103175 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1221 18:24:07.886190  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:07.946617  103175 command_runner.go:130] > NAME      SECRETS   AGE
	I1221 18:24:07.946646  103175 command_runner.go:130] > default   0         0s
	I1221 18:24:07.949536  103175 kubeadm.go:1088] duration metric: took 12.157228142s to wait for elevateKubeSystemPrivileges.
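The burst of NotFound errors above is the expected shape of this wait: the loop polls until the controller-manager's ServiceAccount controller creates the "default" ServiceAccount, the usual signal that the cluster can admit workloads. A standalone sketch of the same wait, assuming a reachable kubeconfig at the path the log uses:

    # poll roughly twice a second until the default ServiceAccount appears
    until kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done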
	I1221 18:24:07.949568  103175 kubeadm.go:406] StartCluster complete in 21.72024732s
	I1221 18:24:07.949590  103175 settings.go:142] acquiring lock: {Name:mk8e49e823ae84efe44355981045de15cdb79660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:24:07.949673  103175 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:24:07.950328  103175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/kubeconfig: {Name:mk377070c6d3dd4bc3f11638f8c446f488cf8c2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:24:07.950559  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1221 18:24:07.950723  103175 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1221 18:24:07.950828  103175 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:24:07.950833  103175 addons.go:69] Setting storage-provisioner=true in profile "multinode-186629"
	I1221 18:24:07.950854  103175 addons.go:237] Setting addon storage-provisioner=true in "multinode-186629"
	I1221 18:24:07.950881  103175 addons.go:69] Setting default-storageclass=true in profile "multinode-186629"
	I1221 18:24:07.950915  103175 host.go:66] Checking if "multinode-186629" exists ...
	I1221 18:24:07.950933  103175 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-186629"
	I1221 18:24:07.950892  103175 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:24:07.951320  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:24:07.951302  103175 kapi.go:59] client config for multinode-186629: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:24:07.951425  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:24:07.952111  103175 cert_rotation.go:137] Starting client certificate rotation controller
	I1221 18:24:07.952341  103175 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1221 18:24:07.952359  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:07.952371  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:07.952385  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:07.962305  103175 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1221 18:24:07.962334  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:07.962343  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:07.962350  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:07.962357  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:07.962365  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:07.962375  103175 round_trippers.go:580]     Content-Length: 291
	I1221 18:24:07.962386  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:07 GMT
	I1221 18:24:07.962398  103175 round_trippers.go:580]     Audit-Id: 41d95fc4-7b83-40bd-919f-d274b0f854f1
	I1221 18:24:07.962453  103175 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9b1538f2-152b-4663-899a-9076fafae97f","resourceVersion":"303","creationTimestamp":"2023-12-21T18:23:54Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1221 18:24:07.962938  103175 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9b1538f2-152b-4663-899a-9076fafae97f","resourceVersion":"303","creationTimestamp":"2023-12-21T18:23:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1221 18:24:07.963009  103175 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1221 18:24:07.963032  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:07.963046  103175 round_trippers.go:473]     Content-Type: application/json
	I1221 18:24:07.963058  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:07.963068  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:07.969705  103175 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1221 18:24:07.969763  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:07.969777  103175 round_trippers.go:580]     Content-Length: 291
	I1221 18:24:07.969787  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:07 GMT
	I1221 18:24:07.969796  103175 round_trippers.go:580]     Audit-Id: 54b41f2f-ed49-49f1-b18a-2406eca9a75e
	I1221 18:24:07.969808  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:07.969818  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:07.969838  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:07.969851  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:07.969883  103175 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9b1538f2-152b-4663-899a-9076fafae97f","resourceVersion":"315","creationTimestamp":"2023-12-21T18:23:54Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
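The GET/PUT pair above rescales the coredns deployment from 2 replicas to 1 through its autoscaling/v1 Scale subresource. The same exchange via client-go's typed client, instead of hand-built round trips, would look roughly like this (helper name and clientset wiring are illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the exchange logged above: read the deployment's
// autoscaling/v1 Scale subresource, set spec.replicas, and write it back.
func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas // the log rescales from 2 replicas down to 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}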
	I1221 18:24:07.970347  103175 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:24:07.970586  103175 kapi.go:59] client config for multinode-186629: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:24:07.970853  103175 addons.go:237] Setting addon default-storageclass=true in "multinode-186629"
	I1221 18:24:07.970899  103175 host.go:66] Checking if "multinode-186629" exists ...
	I1221 18:24:07.971356  103175 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:24:07.973735  103175 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1221 18:24:07.975226  103175 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:24:07.975246  103175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1221 18:24:07.975285  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:24:08.017417  103175 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1221 18:24:08.017443  103175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1221 18:24:08.017499  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:24:08.019393  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:24:08.035266  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
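The two "scp memory" lines above stream the addon YAML over the freshly opened SSH connections instead of copying files from disk. A rough sketch of that pattern with golang.org/x/crypto/ssh, using the port, user, and key path from the log (the "sudo tee" remote writer is an assumption, not minikube's exact ssh_runner mechanism):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile pipes in-memory bytes to a remote path over an SSH session;
// the payload never touches the local filesystem ("scp memory").
func writeRemoteFile(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dst + " >/dev/null")
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32849", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := writeRemoteFile(client, []byte("# addon manifest bytes here"), "/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		panic(err)
	}
}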
	I1221 18:24:08.189015  103175 command_runner.go:130] > apiVersion: v1
	I1221 18:24:08.189040  103175 command_runner.go:130] > data:
	I1221 18:24:08.189047  103175 command_runner.go:130] >   Corefile: |
	I1221 18:24:08.189053  103175 command_runner.go:130] >     .:53 {
	I1221 18:24:08.189059  103175 command_runner.go:130] >         errors
	I1221 18:24:08.189066  103175 command_runner.go:130] >         health {
	I1221 18:24:08.189090  103175 command_runner.go:130] >            lameduck 5s
	I1221 18:24:08.189099  103175 command_runner.go:130] >         }
	I1221 18:24:08.189104  103175 command_runner.go:130] >         ready
	I1221 18:24:08.189114  103175 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1221 18:24:08.189124  103175 command_runner.go:130] >            pods insecure
	I1221 18:24:08.189132  103175 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1221 18:24:08.189142  103175 command_runner.go:130] >            ttl 30
	I1221 18:24:08.189147  103175 command_runner.go:130] >         }
	I1221 18:24:08.189156  103175 command_runner.go:130] >         prometheus :9153
	I1221 18:24:08.189164  103175 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1221 18:24:08.189177  103175 command_runner.go:130] >            max_concurrent 1000
	I1221 18:24:08.189183  103175 command_runner.go:130] >         }
	I1221 18:24:08.189188  103175 command_runner.go:130] >         cache 30
	I1221 18:24:08.189194  103175 command_runner.go:130] >         loop
	I1221 18:24:08.189200  103175 command_runner.go:130] >         reload
	I1221 18:24:08.189206  103175 command_runner.go:130] >         loadbalance
	I1221 18:24:08.189211  103175 command_runner.go:130] >     }
	I1221 18:24:08.189216  103175 command_runner.go:130] > kind: ConfigMap
	I1221 18:24:08.189222  103175 command_runner.go:130] > metadata:
	I1221 18:24:08.189249  103175 command_runner.go:130] >   creationTimestamp: "2023-12-21T18:23:54Z"
	I1221 18:24:08.189260  103175 command_runner.go:130] >   name: coredns
	I1221 18:24:08.189266  103175 command_runner.go:130] >   namespace: kube-system
	I1221 18:24:08.189272  103175 command_runner.go:130] >   resourceVersion: "229"
	I1221 18:24:08.189279  103175 command_runner.go:130] >   uid: 90235f85-25a9-4b2c-9935-9394540bf1bb
	I1221 18:24:08.189458  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1221 18:24:08.307327  103175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1221 18:24:08.307616  103175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1221 18:24:08.453060  103175 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1221 18:24:08.453084  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:08.453092  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:08.453099  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:08.486755  103175 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1221 18:24:08.486781  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:08.486792  103175 round_trippers.go:580]     Content-Length: 291
	I1221 18:24:08.486801  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:08 GMT
	I1221 18:24:08.486809  103175 round_trippers.go:580]     Audit-Id: 1da5a075-0c4f-4620-aab7-346196a74b80
	I1221 18:24:08.486817  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:08.486826  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:08.486834  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:08.486842  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:08.487133  103175 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9b1538f2-152b-4663-899a-9076fafae97f","resourceVersion":"357","creationTimestamp":"2023-12-21T18:23:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1221 18:24:08.487256  103175 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-186629" context rescaled to 1 replicas
	I1221 18:24:08.487286  103175 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1221 18:24:08.490158  103175 out.go:177] * Verifying Kubernetes components...
	I1221 18:24:08.491487  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:24:08.815013  103175 command_runner.go:130] > configmap/coredns replaced
	I1221 18:24:08.819576  103175 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
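What "host record injected" means concretely: the sed pipeline logged at 18:24:08.189458 edits the ConfigMap dumped above before replacing it, inserting a "log" directive ahead of "errors" and a hosts block ahead of "forward . /etc/resolv.conf". Reconstructed from the sed expressions (the final Corefile is not echoed in this log), the injected stanza is:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }

This is what lets pods resolve host.minikube.internal to the gateway address 192.168.58.1.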
	I1221 18:24:09.085760  103175 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1221 18:24:09.090578  103175 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1221 18:24:09.097529  103175 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1221 18:24:09.104686  103175 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1221 18:24:09.113596  103175 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1221 18:24:09.123006  103175 command_runner.go:130] > pod/storage-provisioner created
	I1221 18:24:09.128335  103175 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1221 18:24:09.128473  103175 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1221 18:24:09.128487  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:09.128498  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:09.128511  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:09.128814  103175 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:24:09.129036  103175 kapi.go:59] client config for multinode-186629: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:24:09.129274  103175 node_ready.go:35] waiting up to 6m0s for node "multinode-186629" to be "Ready" ...
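The wait announced here drives the long run of GETs against /api/v1/nodes/multinode-186629 that follows: fetch the Node, check for a Ready condition with status "True", and retry on a short interval until the 6m0s budget expires. A simplified client-go sketch of that loop (the real node_ready.go adds logging and bookkeeping):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node until its Ready condition reports "True",
// matching the roughly half-second GET cadence visible below.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // the caller bounds this with the 6m0s deadline
		case <-time.After(500 * time.Millisecond):
		}
	}
}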
	I1221 18:24:09.129359  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:09.129370  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:09.129380  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:09.129391  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:09.133824  103175 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1221 18:24:09.133854  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:09.133865  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:09.133874  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:09.133883  103175 round_trippers.go:580]     Content-Length: 1273
	I1221 18:24:09.133902  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:09 GMT
	I1221 18:24:09.133910  103175 round_trippers.go:580]     Audit-Id: abf4befc-32e7-4c5e-abee-8c2330225d40
	I1221 18:24:09.133919  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:09.133930  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:09.133968  103175 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"377"},"items":[{"metadata":{"name":"standard","uid":"8039d4de-0083-4921-b621-03ddf011b57e","resourceVersion":"365","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1221 18:24:09.134436  103175 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8039d4de-0083-4921-b621-03ddf011b57e","resourceVersion":"365","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1221 18:24:09.134497  103175 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1221 18:24:09.134508  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:09.134519  103175 round_trippers.go:473]     Content-Type: application/json
	I1221 18:24:09.134531  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:09.134540  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:09.134675  103175 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1221 18:24:09.134691  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:09.134700  103175 round_trippers.go:580]     Audit-Id: 21beec3a-96be-4641-bd8e-292c0fb11739
	I1221 18:24:09.134708  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:09.134717  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:09.134729  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:09.134741  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:09.134752  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:09 GMT
	I1221 18:24:09.134885  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:09.136924  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:09.136942  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:09.136952  103175 round_trippers.go:580]     Audit-Id: b0243207-0c30-4bc9-93f1-59caddef3bd6
	I1221 18:24:09.136961  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:09.136970  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:09.136981  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:09.136990  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:09.137001  103175 round_trippers.go:580]     Content-Length: 1220
	I1221 18:24:09.137011  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:09 GMT
	I1221 18:24:09.137037  103175 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8039d4de-0083-4921-b621-03ddf011b57e","resourceVersion":"365","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
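The storageclasses list and PUT above are the default-storageclass addon confirming that "standard" carries the storageclass.kubernetes.io/is-default-class annotation. A hedged typed-client version of the same exchange (helper name is illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureDefaultStorageClass lists storage.k8s.io/v1 StorageClasses and
// re-applies the default-class annotation on "standard", as the PUT above does.
func ensureDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset) error {
	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range scs.Items {
		sc := &scs.Items[i]
		if sc.Name != "standard" {
			continue
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	}
	return nil
}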
	I1221 18:24:09.138875  103175 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1221 18:24:09.140595  103175 addons.go:508] enable addons completed in 1.189888301s: enabled=[storage-provisioner default-storageclass]
	I1221 18:24:09.629550  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:09.629585  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:09.629596  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:09.629605  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:09.631871  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:09.631894  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:09.631902  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:09.631911  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:09.631919  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:09.631928  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:09.631937  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:09 GMT
	I1221 18:24:09.631947  103175 round_trippers.go:580]     Audit-Id: 93d2e78e-27ae-411f-96e7-9dc2bdeb4aab
	I1221 18:24:09.632044  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:10.130443  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:10.130465  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:10.130473  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:10.130484  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:10.132648  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:10.132671  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:10.132681  103175 round_trippers.go:580]     Audit-Id: b0079579-a5ff-4f2a-9e6b-c758aee6abcd
	I1221 18:24:10.132690  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:10.132699  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:10.132712  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:10.132721  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:10.132728  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:10 GMT
	I1221 18:24:10.132937  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:10.630485  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:10.630507  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:10.630514  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:10.630520  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:10.632950  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:10.632967  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:10.632974  103175 round_trippers.go:580]     Audit-Id: 20e34067-47e3-4a15-afda-bb702e8e7356
	I1221 18:24:10.632980  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:10.632984  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:10.632990  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:10.632994  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:10.632999  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:10 GMT
	I1221 18:24:10.633133  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:11.129477  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:11.129499  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:11.129507  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:11.129513  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:11.131748  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:11.131766  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:11.131772  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:11.131777  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:11 GMT
	I1221 18:24:11.131785  103175 round_trippers.go:580]     Audit-Id: 88692250-fa12-448f-8bb5-9d122ca9ca87
	I1221 18:24:11.131793  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:11.131800  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:11.131810  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:11.131935  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:11.132260  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:11.629709  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:11.629728  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:11.629736  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:11.629743  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:11.631831  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:11.631850  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:11.631857  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:11.631863  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:11.631869  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:11 GMT
	I1221 18:24:11.631876  103175 round_trippers.go:580]     Audit-Id: 7b623a6d-19d4-41fa-a6c8-3febeeb5d642
	I1221 18:24:11.631885  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:11.631895  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:11.632009  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:12.129536  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:12.129558  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:12.129565  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:12.129571  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:12.131628  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:12.131644  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:12.131652  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:12.131658  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:12.131663  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:12 GMT
	I1221 18:24:12.131668  103175 round_trippers.go:580]     Audit-Id: de1b4b2d-b3aa-4310-93b3-f6adce5048ad
	I1221 18:24:12.131673  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:12.131680  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:12.131825  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:12.630457  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:12.630480  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:12.630488  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:12.630494  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:12.632714  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:12.632740  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:12.632751  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:12.632759  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:12.632768  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:12.632776  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:12 GMT
	I1221 18:24:12.632784  103175 round_trippers.go:580]     Audit-Id: a3b652c1-6748-4df6-802d-28ed5624ae14
	I1221 18:24:12.632792  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:12.632959  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:13.129526  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:13.129549  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:13.129559  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:13.129567  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:13.131746  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:13.131764  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:13.131774  103175 round_trippers.go:580]     Audit-Id: 90c97052-e240-497f-bf55-226cd9260cc4
	I1221 18:24:13.131782  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:13.131791  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:13.131807  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:13.131817  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:13.131829  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:13 GMT
	I1221 18:24:13.131979  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:13.132314  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:13.629486  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:13.629516  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:13.629531  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:13.629537  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:13.631674  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:13.631698  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:13.631708  103175 round_trippers.go:580]     Audit-Id: 99f9c5ed-1bf5-43dd-bb00-aad2fab01cf9
	I1221 18:24:13.631716  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:13.631724  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:13.631731  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:13.631748  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:13.631757  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:13 GMT
	I1221 18:24:13.631876  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:14.130439  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:14.130460  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:14.130468  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:14.130474  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:14.132607  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:14.132625  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:14.132631  103175 round_trippers.go:580]     Audit-Id: 7a18b7fc-1f25-4524-812b-de77f8fc559e
	I1221 18:24:14.132636  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:14.132643  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:14.132648  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:14.132654  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:14.132661  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:14 GMT
	I1221 18:24:14.132815  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:14.630438  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:14.630463  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:14.630471  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:14.630477  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:14.632693  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:14.632713  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:14.632722  103175 round_trippers.go:580]     Audit-Id: 16ab8e15-0c2a-493a-81be-6c383d728c07
	I1221 18:24:14.632730  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:14.632737  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:14.632745  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:14.632756  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:14.632769  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:14 GMT
	I1221 18:24:14.632866  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:15.129436  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:15.129457  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:15.129465  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:15.129471  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:15.131578  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:15.131596  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:15.131605  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:15 GMT
	I1221 18:24:15.131613  103175 round_trippers.go:580]     Audit-Id: 634f9588-b103-4ad6-98fc-03dd4a08467c
	I1221 18:24:15.131619  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:15.131628  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:15.131635  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:15.131643  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:15.131821  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:15.630439  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:15.630460  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:15.630471  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:15.630478  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:15.632645  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:15.632666  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:15.632675  103175 round_trippers.go:580]     Audit-Id: a0292832-6cba-44a7-91a1-bffc579a0ba8
	I1221 18:24:15.632682  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:15.632690  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:15.632697  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:15.632705  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:15.632718  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:15 GMT
	I1221 18:24:15.632830  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:15.633152  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:16.130420  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:16.130438  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:16.130445  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:16.130451  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:16.132463  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:16.132481  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:16.132490  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:16.132497  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:16.132505  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:16.132512  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:16.132521  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:16 GMT
	I1221 18:24:16.132533  103175 round_trippers.go:580]     Audit-Id: 33fc348f-15ff-461a-8ffa-a064d3c52c43
	I1221 18:24:16.132674  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:16.630439  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:16.630457  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:16.630465  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:16.630471  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:16.632561  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:16.632580  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:16.632587  103175 round_trippers.go:580]     Audit-Id: 1c03db7c-9e07-4156-a4b1-f0d71a663531
	I1221 18:24:16.632592  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:16.632600  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:16.632607  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:16.632619  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:16.632628  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:16 GMT
	I1221 18:24:16.632753  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:17.130104  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:17.130125  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:17.130133  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:17.130140  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:17.132155  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:17.132172  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:17.132178  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:17.132184  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:17.132189  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:17.132200  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:17 GMT
	I1221 18:24:17.132210  103175 round_trippers.go:580]     Audit-Id: 02b253a1-03b7-4ea2-9826-1405d55d29c1
	I1221 18:24:17.132221  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:17.132378  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:17.629503  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:17.629530  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:17.629539  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:17.629545  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:17.631621  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:17.631638  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:17.631646  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:17.631654  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:17 GMT
	I1221 18:24:17.631662  103175 round_trippers.go:580]     Audit-Id: ffbe94b3-39dd-4267-b7d6-6570352194ef
	I1221 18:24:17.631671  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:17.631682  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:17.631690  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:17.631818  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:18.130450  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:18.130468  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:18.130475  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:18.130481  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:18.132567  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:18.132589  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:18.132598  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:18.132616  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:18.132624  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:18 GMT
	I1221 18:24:18.132632  103175 round_trippers.go:580]     Audit-Id: 433300fa-a2c6-4333-9091-05558f1cc0d0
	I1221 18:24:18.132641  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:18.132650  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:18.132785  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:18.133101  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
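
The loop visible above is minikube's node-readiness wait (node_ready.go): it re-fetches the Node object from the API server roughly every 500ms and checks its "Ready" condition, logging has status "Ready":"False" until the kubelet reports otherwise. As a minimal illustrative sketch of the same poll-until-Ready pattern, assuming client-go and a kubeconfig at the default location (this is not minikube's actual implementation; the 500ms interval and the 6-minute timeout are assumptions inferred from this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the Node's Ready condition is True.
    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForNodeReady repeatedly issues GET /api/v1/nodes/<name> until the
    // node is Ready or ctx expires, mirroring the request/response cycles
    // recorded in the log above.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if nodeIsReady(node) {
                return nil
            }
            fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
            select {
            case <-ctx.Done():
                return ctx.Err() // overall deadline, like the test's wait timeout
            case <-time.After(500 * time.Millisecond): // cadence seen in the log
            }
        }
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForNodeReady(ctx, cs, "multinode-186629"); err != nil {
            panic(err)
        }
    }

Each iteration of the sketch corresponds to one GET/200 block in the log; once the kubelet posts a Ready=True condition, the loop exits and the test proceeds.
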
	I1221 18:24:18.630367  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:18.630385  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:18.630393  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:18.630399  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:18.632566  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:18.632590  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:18.632600  103175 round_trippers.go:580]     Audit-Id: 9e82d650-4941-4c0e-9c9b-a083c4a34164
	I1221 18:24:18.632610  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:18.632619  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:18.632628  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:18.632638  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:18.632650  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:18 GMT
	I1221 18:24:18.632752  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:19.130187  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:19.130213  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:19.130226  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:19.130235  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:19.132367  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:19.132390  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:19.132399  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:19.132408  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:19.132416  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:19.132424  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:19 GMT
	I1221 18:24:19.132435  103175 round_trippers.go:580]     Audit-Id: 2d662a6a-6260-4f69-b7c9-557b5e8293e0
	I1221 18:24:19.132441  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:19.132586  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:19.630231  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:19.630254  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:19.630262  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:19.630268  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:19.632463  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:19.632488  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:19.632498  103175 round_trippers.go:580]     Audit-Id: 5011b2ec-069b-435a-bcb8-9c1d0e30a6d0
	I1221 18:24:19.632507  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:19.632514  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:19.632523  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:19.632531  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:19.632541  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:19 GMT
	I1221 18:24:19.632641  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:20.130256  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:20.130277  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:20.130285  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:20.130291  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:20.132492  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:20.132514  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:20.132521  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:20.132526  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:20.132533  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:20.132539  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:20.132547  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:20 GMT
	I1221 18:24:20.132563  103175 round_trippers.go:580]     Audit-Id: 738d4620-c4c1-478f-ba7c-4105b70d12b8
	I1221 18:24:20.132686  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:20.630269  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:20.630289  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:20.630297  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:20.630303  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:20.632542  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:20.632559  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:20.632565  103175 round_trippers.go:580]     Audit-Id: 686ccdfd-be4b-4f10-997c-4798d47acfa7
	I1221 18:24:20.632570  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:20.632575  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:20.632581  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:20.632586  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:20.632599  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:20 GMT
	I1221 18:24:20.632708  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:20.633040  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:21.130363  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:21.130382  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:21.130390  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:21.130396  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:21.132583  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:21.132607  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:21.132618  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:21.132626  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:21 GMT
	I1221 18:24:21.132638  103175 round_trippers.go:580]     Audit-Id: 7ce8f73d-ba62-422d-90a2-6c6afd321269
	I1221 18:24:21.132651  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:21.132661  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:21.132669  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:21.132782  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:21.630218  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:21.630245  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:21.630254  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:21.630261  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:21.632428  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:21.632455  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:21.632465  103175 round_trippers.go:580]     Audit-Id: 960c73fd-611d-4892-9c2f-c9778b28ab28
	I1221 18:24:21.632473  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:21.632480  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:21.632488  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:21.632508  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:21.632516  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:21 GMT
	I1221 18:24:21.632668  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:22.130313  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:22.130333  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:22.130341  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:22.130346  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:22.132463  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:22.132479  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:22.132486  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:22 GMT
	I1221 18:24:22.132491  103175 round_trippers.go:580]     Audit-Id: 913187f7-6ea1-49d8-bfd9-1795ae3d902d
	I1221 18:24:22.132496  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:22.132501  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:22.132509  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:22.132516  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:22.132648  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:22.630292  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:22.630314  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:22.630322  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:22.630328  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:22.632521  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:22.632544  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:22.632554  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:22.632560  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:22.632565  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:22.632572  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:22.632578  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:22 GMT
	I1221 18:24:22.632583  103175 round_trippers.go:580]     Audit-Id: 899fcf6d-4e41-4e09-bc94-52473bc3b6a8
	I1221 18:24:22.632681  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:23.129901  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:23.129922  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:23.129930  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:23.129936  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:23.132197  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:23.132217  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:23.132223  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:23.132232  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:23 GMT
	I1221 18:24:23.132240  103175 round_trippers.go:580]     Audit-Id: c3a17de9-2725-4716-91f5-c468c070db7d
	I1221 18:24:23.132249  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:23.132261  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:23.132270  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:23.132407  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:23.132831  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:23.629969  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:23.629989  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:23.629997  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:23.630003  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:23.632106  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:23.632130  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:23.632140  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:23.632148  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:23.632155  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:23.632163  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:23 GMT
	I1221 18:24:23.632172  103175 round_trippers.go:580]     Audit-Id: ce9bbc70-e9a1-4588-af58-f0eb69da07dc
	I1221 18:24:23.632181  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:23.632295  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:24.129866  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:24.129893  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:24.129901  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:24.129907  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:24.132134  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:24.132157  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:24.132164  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:24.132169  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:24.132174  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:24.132181  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:24.132190  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:24 GMT
	I1221 18:24:24.132201  103175 round_trippers.go:580]     Audit-Id: 3ff4ad1c-faf0-41be-877e-7dfb6d5acb02
	I1221 18:24:24.132324  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:24.629823  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:24.629845  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:24.629853  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:24.629859  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:24.632116  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:24.632137  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:24.632147  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:24.632154  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:24.632161  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:24.632173  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:24 GMT
	I1221 18:24:24.632187  103175 round_trippers.go:580]     Audit-Id: 82dc567d-59b0-4a80-8347-24c68348622c
	I1221 18:24:24.632197  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:24.632290  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:25.129887  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:25.129911  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:25.129919  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:25.129925  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:25.132274  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:25.132292  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:25.132298  103175 round_trippers.go:580]     Audit-Id: e0c8608f-ef93-4e96-b840-1b5ff2436aa1
	I1221 18:24:25.132304  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:25.132308  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:25.132313  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:25.132318  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:25.132324  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:25 GMT
	I1221 18:24:25.132463  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:25.630157  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:25.630179  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:25.630190  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:25.630197  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:25.632313  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:25.632339  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:25.632350  103175 round_trippers.go:580]     Audit-Id: ce65bf3a-2627-4571-8183-4b32b3160555
	I1221 18:24:25.632361  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:25.632374  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:25.632383  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:25.632389  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:25.632400  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:25 GMT
	I1221 18:24:25.632501  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:25.632788  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:26.130089  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:26.130109  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:26.130117  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:26.130122  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:26.132302  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:26.132324  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:26.132334  103175 round_trippers.go:580]     Audit-Id: 5f9e09e9-5ba2-4928-990f-c4cf5261f7a8
	I1221 18:24:26.132341  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:26.132348  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:26.132356  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:26.132364  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:26.132377  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:26 GMT
	I1221 18:24:26.132479  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:26.630415  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:26.630437  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:26.630446  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:26.630452  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:26.632413  103175 round_trippers.go:574] Response Status: 200 OK in 1 millisecond
	I1221 18:24:26.632431  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:26.632437  103175 round_trippers.go:580]     Audit-Id: 61fda723-01c1-4c82-bf91-261e044e3c7a
	I1221 18:24:26.632443  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:26.632448  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:26.632452  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:26.632457  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:26.632462  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:26 GMT
	I1221 18:24:26.632576  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:27.130265  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:27.130285  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:27.130293  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:27.130299  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:27.132375  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:27.132395  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:27.132405  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:27.132413  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:27 GMT
	I1221 18:24:27.132421  103175 round_trippers.go:580]     Audit-Id: 47c38fcd-9db5-4dc8-9b47-a8ec13e5c624
	I1221 18:24:27.132429  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:27.132436  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:27.132444  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:27.132547  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:27.630129  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:27.630150  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:27.630158  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:27.630164  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:27.632262  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:27.632286  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:27.632296  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:27.632302  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:27.632307  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:27.632313  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:27 GMT
	I1221 18:24:27.632325  103175 round_trippers.go:580]     Audit-Id: a6fe0478-1147-492a-8a9f-b1dd4415e17a
	I1221 18:24:27.632330  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:27.632508  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:28.130264  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:28.130283  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:28.130290  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:28.130296  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:28.132328  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:28.132350  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:28.132362  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:28.132370  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:28.132383  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:28.132398  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:28 GMT
	I1221 18:24:28.132405  103175 round_trippers.go:580]     Audit-Id: 4c1f7129-15c1-4870-9d01-4c31dcf3f775
	I1221 18:24:28.132413  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:28.132521  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:28.132851  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:28.630117  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:28.630135  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:28.630142  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:28.630148  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:28.632331  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:28.632354  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:28.632363  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:28 GMT
	I1221 18:24:28.632370  103175 round_trippers.go:580]     Audit-Id: d76910c7-0778-4679-97bd-6ae011fd19a7
	I1221 18:24:28.632378  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:28.632385  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:28.632393  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:28.632402  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:28.632574  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:29.130226  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:29.130248  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:29.130256  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:29.130262  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:29.132631  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:29.132651  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:29.132658  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:29.132663  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:29.132668  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:29.132673  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:29 GMT
	I1221 18:24:29.132678  103175 round_trippers.go:580]     Audit-Id: a631379b-1533-41e1-ac1d-9f7acb6b4bf9
	I1221 18:24:29.132685  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:29.132824  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:29.630448  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:29.630468  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:29.630476  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:29.630482  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:29.632555  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:29.632573  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:29.632580  103175 round_trippers.go:580]     Audit-Id: c382f655-7380-4a9d-8a8f-f1bd795e0171
	I1221 18:24:29.632585  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:29.632590  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:29.632595  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:29.632601  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:29.632610  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:29 GMT
	I1221 18:24:29.632717  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:30.130299  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:30.130320  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:30.130328  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:30.130334  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:30.132417  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:30.132437  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:30.132446  103175 round_trippers.go:580]     Audit-Id: 7219a509-cb4d-41ef-a428-36cebf155862
	I1221 18:24:30.132453  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:30.132461  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:30.132475  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:30.132490  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:30.132499  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:30 GMT
	I1221 18:24:30.132648  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:30.132964  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:30.630298  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:30.630322  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:30.630334  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:30.630344  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:30.632581  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:30.632614  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:30.632624  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:30.632632  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:30.632641  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:30 GMT
	I1221 18:24:30.632650  103175 round_trippers.go:580]     Audit-Id: 72ee3a43-28a5-45ae-a1a8-8dc3d830779e
	I1221 18:24:30.632658  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:30.632665  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:30.632757  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:31.130398  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:31.130422  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:31.130430  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:31.130437  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:31.132460  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:31.132477  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:31.132483  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:31.132488  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:31.132494  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:31 GMT
	I1221 18:24:31.132499  103175 round_trippers.go:580]     Audit-Id: 45626f0a-471e-48f6-a659-10bac6a1b757
	I1221 18:24:31.132507  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:31.132514  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:31.132656  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:31.630317  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:31.630341  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:31.630355  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:31.630361  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:31.632494  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:31.632514  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:31.632521  103175 round_trippers.go:580]     Audit-Id: 8640b4f7-0282-4f85-a05b-8b62813b5219
	I1221 18:24:31.632528  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:31.632536  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:31.632547  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:31.632559  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:31.632566  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:31 GMT
	I1221 18:24:31.632687  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:32.130365  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:32.130393  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:32.130415  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:32.130425  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:32.132684  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:32.132711  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:32.132720  103175 round_trippers.go:580]     Audit-Id: 1524d638-3738-4d39-81bf-428fdc725e0a
	I1221 18:24:32.132728  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:32.132737  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:32.132748  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:32.132756  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:32.132765  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:32 GMT
	I1221 18:24:32.132874  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:32.133206  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:32.630442  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:32.630462  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:32.630470  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:32.630476  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:32.632696  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:32.632717  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:32.632726  103175 round_trippers.go:580]     Audit-Id: 73d78294-c152-496c-8435-bab6765b7b94
	I1221 18:24:32.632734  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:32.632744  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:32.632751  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:32.632760  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:32.632772  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:32 GMT
	I1221 18:24:32.632921  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:33.130443  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:33.130465  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:33.130473  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:33.130480  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:33.132824  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:33.132846  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:33.132853  103175 round_trippers.go:580]     Audit-Id: b50d8118-e8d6-41e7-a1af-4fbdfe7dd1ba
	I1221 18:24:33.132861  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:33.132869  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:33.132878  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:33.132895  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:33.132913  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:33 GMT
	I1221 18:24:33.133028  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:33.630448  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:33.630471  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:33.630478  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:33.630485  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:33.632639  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:33.632663  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:33.632671  103175 round_trippers.go:580]     Audit-Id: b3e87ec2-72f7-474e-a6e1-3e8a1bcc9e11
	I1221 18:24:33.632679  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:33.632688  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:33.632701  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:33.632709  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:33.632714  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:33 GMT
	I1221 18:24:33.632844  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:34.130505  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:34.130530  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:34.130543  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:34.130552  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:34.132718  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:34.132736  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:34.132742  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:34.132747  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:34.132752  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:34.132757  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:34.132762  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:34 GMT
	I1221 18:24:34.132767  103175 round_trippers.go:580]     Audit-Id: 3206d310-b9f1-4ba1-a92e-f5f626ea410c
	I1221 18:24:34.132914  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:34.133334  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:34.630492  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:34.630512  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:34.630520  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:34.630526  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:34.632853  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:34.632872  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:34.632879  103175 round_trippers.go:580]     Audit-Id: 194c0643-5b3e-449e-8f2c-4e73e8cf715d
	I1221 18:24:34.632884  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:34.632889  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:34.632894  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:34.632935  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:34.632946  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:34 GMT
	I1221 18:24:34.633058  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:35.130469  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:35.130494  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:35.130506  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:35.130516  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:35.132579  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:35.132602  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:35.132613  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:35.132619  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:35.132624  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:35.132630  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:35.132636  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:35 GMT
	I1221 18:24:35.132641  103175 round_trippers.go:580]     Audit-Id: 4733387e-2325-4a15-a914-b43c91f37e54
	I1221 18:24:35.132761  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:35.630247  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:35.630276  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:35.630288  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:35.630299  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:35.632422  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:35.632444  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:35.632455  103175 round_trippers.go:580]     Audit-Id: 2e6f8067-d1a6-40af-9be1-339d7f10293e
	I1221 18:24:35.632464  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:35.632474  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:35.632487  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:35.632499  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:35.632508  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:35 GMT
	I1221 18:24:35.632611  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:36.130203  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:36.130230  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:36.130241  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:36.130251  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:36.132593  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:36.132622  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:36.132632  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:36 GMT
	I1221 18:24:36.132638  103175 round_trippers.go:580]     Audit-Id: 1fe68709-a6f8-4e56-b847-f75100d922b0
	I1221 18:24:36.132643  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:36.132648  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:36.132653  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:36.132659  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:36.132761  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:36.629440  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:36.629459  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:36.629467  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:36.629473  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:36.631574  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:36.631591  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:36.631598  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:36.631603  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:36.631608  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:36.631613  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:36 GMT
	I1221 18:24:36.631618  103175 round_trippers.go:580]     Audit-Id: 02bfedbc-a650-44af-b26b-755a693fed8a
	I1221 18:24:36.631623  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:36.631779  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:36.632173  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:37.129554  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:37.129576  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:37.129587  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:37.129598  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:37.131786  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:37.131805  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:37.131813  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:37.131818  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:37 GMT
	I1221 18:24:37.131823  103175 round_trippers.go:580]     Audit-Id: b139eea4-1946-4707-bc7f-abedf34b217b
	I1221 18:24:37.131828  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:37.131835  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:37.131840  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:37.131948  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:37.630438  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:37.630461  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:37.630469  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:37.630475  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:37.632434  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:37.632456  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:37.632466  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:37.632475  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:37.632482  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:37.632493  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:37 GMT
	I1221 18:24:37.632501  103175 round_trippers.go:580]     Audit-Id: 706cef8d-e65f-445a-9f0f-cdc0f5a6593b
	I1221 18:24:37.632512  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:37.632626  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:38.130389  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:38.130407  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:38.130422  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:38.130428  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:38.132549  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:38.132570  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:38.132578  103175 round_trippers.go:580]     Audit-Id: 70ebeec5-caa3-4e0f-b99d-a582ae228013
	I1221 18:24:38.132586  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:38.132593  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:38.132600  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:38.132607  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:38.132617  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:38 GMT
	I1221 18:24:38.132762  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:38.630415  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:38.630437  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:38.630447  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:38.630456  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:38.632533  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:38.632556  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:38.632566  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:38 GMT
	I1221 18:24:38.632575  103175 round_trippers.go:580]     Audit-Id: bbd7b4b8-1572-4e90-96a0-433ed7acdcae
	I1221 18:24:38.632584  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:38.632593  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:38.632612  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:38.632626  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:38.632721  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:38.633096  103175 node_ready.go:58] node "multinode-186629" has status "Ready":"False"
	I1221 18:24:39.130325  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:39.130342  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:39.130350  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:39.130356  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:39.132481  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:39.132498  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:39.132505  103175 round_trippers.go:580]     Audit-Id: 189fa8c6-37cf-4be8-9d4d-d17264a38ce3
	I1221 18:24:39.132510  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:39.132515  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:39.132520  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:39.132525  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:39.132530  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:39 GMT
	I1221 18:24:39.132624  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"299","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1221 18:24:39.630282  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:39.630302  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:39.630310  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:39.630315  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:39.632464  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:39.632484  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:39.632495  103175 round_trippers.go:580]     Audit-Id: b7917c9f-7d17-409e-a5f6-ed9594b0c39b
	I1221 18:24:39.632504  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:39.632512  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:39.632521  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:39.632530  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:39.632541  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:39 GMT
	I1221 18:24:39.632677  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:39.633012  103175 node_ready.go:49] node "multinode-186629" has status "Ready":"True"
	I1221 18:24:39.633028  103175 node_ready.go:38] duration metric: took 30.50373388s waiting for node "multinode-186629" to be "Ready" ...
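
The 30.5s wait recorded above is a plain readiness poll: every ~500ms the client re-GETs the Node object and checks whether its Ready condition has flipped to True. A minimal client-go sketch of that loop follows, assuming a Clientset cs; waitNodeReady, the 500ms cadence, and the timeout parameter are illustrative stand-ins, not minikube's actual node_ready.go helper:

	package ready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Node every 500ms (the cadence visible in the
	// log above) until its Ready condition is True or the timeout expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // an API error aborts the poll early
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // Ready condition not posted yet; keep polling
			})
	}

Returning the error from the condition func makes PollUntilContextTimeout stop immediately on API failures rather than burning the whole timeout; the string of 200 OK responses above never exercises that path.
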
	I1221 18:24:39.633036  103175 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:24:39.633105  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:24:39.633114  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:39.633121  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:39.633127  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:39.635851  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:39.635877  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:39.635887  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:39.635897  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:39.635904  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:39.635918  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:39 GMT
	I1221 18:24:39.635933  103175 round_trippers.go:580]     Audit-Id: 2280dff3-ea6e-4cc2-b67e-d50b5bfa7c71
	I1221 18:24:39.635942  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:39.636421  103175 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"396","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1221 18:24:39.639294  103175 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rzjlp" in "kube-system" namespace to be "Ready" ...
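
The same poll now repeats per system-critical pod, where "Ready" means the pod's PodReady condition. A companion sketch under the same assumptions (client-go Clientset cs, same package and imports as above plus corev1.PodReady); podReady is illustrative, not minikube's pod_ready.go:

	// podReady reports whether the pod's PodReady condition is True; the
	// coredns pod above only becomes Ready once the node itself is Ready
	// and kubelet can start its containers.
	func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
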
	I1221 18:24:39.639362  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rzjlp
	I1221 18:24:39.639370  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:39.639377  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:39.639383  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:39.641177  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:39.641191  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:39.641197  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:39.641202  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:39.641207  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:39 GMT
	I1221 18:24:39.641212  103175 round_trippers.go:580]     Audit-Id: e62ec74a-6ced-4b5d-afa0-f5481f64d89a
	I1221 18:24:39.641220  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:39.641228  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:39.641359  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"396","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1221 18:24:39.641801  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:39.641817  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:39.641824  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:39.641831  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:39.643382  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:39.643395  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:39.643401  103175 round_trippers.go:580]     Audit-Id: 7feb1567-d64c-44b8-921c-8f534502cc02
	I1221 18:24:39.643407  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:39.643412  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:39.643417  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:39.643422  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:39.643427  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:39 GMT
	I1221 18:24:39.643576  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.140359  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rzjlp
	I1221 18:24:40.140381  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.140389  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.140395  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.142794  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:40.142818  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.142825  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.142831  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.142836  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.142841  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.142846  103175 round_trippers.go:580]     Audit-Id: 0e6fb328-28f9-4c35-a5b5-f4bea7c7a5f5
	I1221 18:24:40.142854  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.142989  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"396","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1221 18:24:40.143557  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.143576  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.143587  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.143605  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.145532  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.145553  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.145562  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.145571  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.145583  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.145595  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.145603  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.145614  103175 round_trippers.go:580]     Audit-Id: ead14fd0-eb3c-47c9-babd-b0ca408673df
	I1221 18:24:40.145726  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.640382  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rzjlp
	I1221 18:24:40.640408  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.640420  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.640432  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.642778  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:40.642798  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.642804  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.642809  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.642815  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.642819  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.642824  103175 round_trippers.go:580]     Audit-Id: fecd8645-5b09-4324-8663-d5f66609e7c5
	I1221 18:24:40.642829  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.642931  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"409","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1221 18:24:40.643448  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.643465  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.643474  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.643483  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.645300  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.645320  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.645329  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.645338  103175 round_trippers.go:580]     Audit-Id: 44147be3-9f98-4fc6-b4fa-42b54fdd3ab8
	I1221 18:24:40.645347  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.645356  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.645368  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.645377  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.645498  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.645783  103175 pod_ready.go:92] pod "coredns-5dd5756b68-rzjlp" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:40.645799  103175 pod_ready.go:81] duration metric: took 1.00648561s waiting for pod "coredns-5dd5756b68-rzjlp" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.645808  103175 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.645852  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-186629
	I1221 18:24:40.645860  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.645867  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.645872  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.647475  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.647489  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.647495  103175 round_trippers.go:580]     Audit-Id: cbf13d55-c37c-46af-b46f-5c428f249600
	I1221 18:24:40.647501  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.647506  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.647511  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.647516  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.647521  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.647650  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-186629","namespace":"kube-system","uid":"050d0edc-924f-43b4-ae37-c41be4b23abe","resourceVersion":"282","creationTimestamp":"2023-12-21T18:23:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8fc4208dd16cebfa046486404c6879d3","kubernetes.io/config.mirror":"8fc4208dd16cebfa046486404c6879d3","kubernetes.io/config.seen":"2023-12-21T18:23:49.133937645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1221 18:24:40.648046  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.648060  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.648071  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.648081  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.649678  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.649696  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.649705  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.649712  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.649719  103175 round_trippers.go:580]     Audit-Id: c5ebcd42-c184-43f5-9483-1c541e9ec209
	I1221 18:24:40.649730  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.649741  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.649749  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.649883  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.650131  103175 pod_ready.go:92] pod "etcd-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:40.650144  103175 pod_ready.go:81] duration metric: took 4.327924ms waiting for pod "etcd-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.650153  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.650193  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-186629
	I1221 18:24:40.650200  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.650206  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.650213  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.651968  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.651986  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.651996  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.652005  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.652014  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.652029  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.652037  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.652048  103175 round_trippers.go:580]     Audit-Id: c8339a52-93e3-4cbb-a8cf-ca08432daeb8
	I1221 18:24:40.652162  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-186629","namespace":"kube-system","uid":"494ef2df-db06-45ea-89d9-d277b1915b9b","resourceVersion":"280","creationTimestamp":"2023-12-21T18:23:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"546ef8ac4384911117f3b86602f32ae5","kubernetes.io/config.mirror":"546ef8ac4384911117f3b86602f32ae5","kubernetes.io/config.seen":"2023-12-21T18:23:49.133939564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1221 18:24:40.652516  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.652528  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.652535  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.652541  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.654010  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.654024  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.654030  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.654035  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.654040  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.654047  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.654056  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.654063  103175 round_trippers.go:580]     Audit-Id: 0c9076d6-dd7a-4a0f-bc8a-7a63c19fe41e
	I1221 18:24:40.654210  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.654486  103175 pod_ready.go:92] pod "kube-apiserver-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:40.654500  103175 pod_ready.go:81] duration metric: took 4.341073ms waiting for pod "kube-apiserver-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.654507  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.654543  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-186629
	I1221 18:24:40.654551  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.654557  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.654566  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.656045  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.656063  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.656068  103175 round_trippers.go:580]     Audit-Id: 7791467e-7100-4c73-94ed-3cea5ab86bc0
	I1221 18:24:40.656074  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.656079  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.656084  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.656090  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.656097  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.656305  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-186629","namespace":"kube-system","uid":"327f27d3-7657-4072-a08d-b5ee04f8c570","resourceVersion":"274","creationTimestamp":"2023-12-21T18:23:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"934cc45a1b5ba86939f57849c5f23ab8","kubernetes.io/config.mirror":"934cc45a1b5ba86939f57849c5f23ab8","kubernetes.io/config.seen":"2023-12-21T18:23:54.990477588Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1221 18:24:40.656702  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.656716  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.656722  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.656729  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.658227  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.658243  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.658251  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.658258  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.658266  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.658274  103175 round_trippers.go:580]     Audit-Id: a1bb0229-4023-413f-a4b0-4fc799b3a9bf
	I1221 18:24:40.658286  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.658296  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.658435  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.658691  103175 pod_ready.go:92] pod "kube-controller-manager-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:40.658704  103175 pod_ready.go:81] duration metric: took 4.191714ms waiting for pod "kube-controller-manager-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.658712  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sq9cp" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:40.658750  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sq9cp
	I1221 18:24:40.658757  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.658764  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.658770  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.660301  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:40.660316  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.660324  103175 round_trippers.go:580]     Audit-Id: 84811329-205c-43ac-866c-b0df259f27a8
	I1221 18:24:40.660332  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.660339  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.660348  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.660358  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.660370  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.660488  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sq9cp","generateName":"kube-proxy-","namespace":"kube-system","uid":"74302016-3be7-43b4-9909-8a256ce497b6","resourceVersion":"372","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"56989027-1f83-41ed-9e39-108798d50da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56989027-1f83-41ed-9e39-108798d50da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1221 18:24:40.831152  103175 request.go:629] Waited for 170.324439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.831201  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:40.831215  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:40.831223  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:40.831232  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:40.833456  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:40.833477  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:40.833486  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:40.833494  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:40 GMT
	I1221 18:24:40.833501  103175 round_trippers.go:580]     Audit-Id: abaab67b-acfd-4b97-9bec-bc26f4ca0c15
	I1221 18:24:40.833508  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:40.833516  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:40.833537  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:40.833720  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:40.834016  103175 pod_ready.go:92] pod "kube-proxy-sq9cp" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:40.834030  103175 pod_ready.go:81] duration metric: took 175.313781ms waiting for pod "kube-proxy-sq9cp" in "kube-system" namespace to be "Ready" ...
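
The "Waited for … due to client-side throttling" lines scattered through this phase come from client-go's built-in rate limiter, not from the server's Priority and Fairness feature. A rest.Config defaults to QPS=5 with Burst=10, so a 500ms polling loop across many resources trips the limiter quickly. A minimal sketch of raising those limits (the kubeconfig path is a placeholder; minikube assembles its REST config itself):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; minikube builds its client config internally.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5, Burst=10; a tight polling loop exceeds
        // that and produces the "client-side throttling" waits seen above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }
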
	I1221 18:24:40.834039  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:41.030369  103175 request.go:629] Waited for 196.277039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-186629
	I1221 18:24:41.030432  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-186629
	I1221 18:24:41.030445  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:41.030452  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:41.030458  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:41.032457  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:41.032473  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:41.032480  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:41.032485  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:41.032490  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:41.032495  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:41.032500  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:41 GMT
	I1221 18:24:41.032505  103175 round_trippers.go:580]     Audit-Id: 15c16767-0609-48dc-a9b0-f74ce4693dff
	I1221 18:24:41.032613  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-186629","namespace":"kube-system","uid":"71e349f9-0a8d-43da-918d-917bbe11b7b1","resourceVersion":"281","creationTimestamp":"2023-12-21T18:23:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cff871542a279c784fc3936f791b252","kubernetes.io/config.mirror":"4cff871542a279c784fc3936f791b252","kubernetes.io/config.seen":"2023-12-21T18:23:49.133933095Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1221 18:24:41.230943  103175 request.go:629] Waited for 197.962352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:41.231010  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:41.231015  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:41.231022  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:41.231029  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:41.233151  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:41.233167  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:41.233173  103175 round_trippers.go:580]     Audit-Id: 8a402bc1-6b0e-4aa4-b804-e2f40382f376
	I1221 18:24:41.233179  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:41.233187  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:41.233195  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:41.233205  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:41.233213  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:41 GMT
	I1221 18:24:41.233358  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:41.233653  103175 pod_ready.go:92] pod "kube-scheduler-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:41.233668  103175 pod_ready.go:81] duration metric: took 399.622941ms waiting for pod "kube-scheduler-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:41.233677  103175 pod_ready.go:38] duration metric: took 1.600629193s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
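
Every pod wait in this phase follows the same loop: GET the pod, inspect its Ready condition, GET the node, sleep roughly 500ms, repeat. A minimal client-go sketch of that loop (the package and function names are placeholders, not minikube's pod_ready implementation):

    package kubewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod every 500ms until its Ready condition is True,
    // mirroring the GET / check-condition loop in the log above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
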
	I1221 18:24:41.233690  103175 api_server.go:52] waiting for apiserver process to appear ...
	I1221 18:24:41.233739  103175 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:24:41.242817  103175 command_runner.go:130] > 1409
	I1221 18:24:41.243565  103175 api_server.go:72] duration metric: took 32.756249131s to wait for apiserver process to appear ...
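
The pgrep step above is the whole process check: -f matches against the full command line, -x requires the whole line to match the pattern, and -n prints only the newest matching PID (1409 here). A small sketch of the same probe via os/exec, run locally rather than over SSH as minikube does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -f: full command line, -x: whole-line match, -n: newest PID only.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
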
	I1221 18:24:41.243584  103175 api_server.go:88] waiting for apiserver healthz status ...
	I1221 18:24:41.243602  103175 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1221 18:24:41.247457  103175 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1221 18:24:41.247517  103175 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1221 18:24:41.247526  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:41.247534  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:41.247540  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:41.248358  103175 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1221 18:24:41.248378  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:41.248384  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:41.248389  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:41.248396  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:41.248405  103175 round_trippers.go:580]     Content-Length: 264
	I1221 18:24:41.248410  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:41 GMT
	I1221 18:24:41.248416  103175 round_trippers.go:580]     Audit-Id: 72118ec2-994d-4787-bc36-cc9a82c2de32
	I1221 18:24:41.248423  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:41.248437  103175 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1221 18:24:41.248493  103175 api_server.go:141] control plane version: v1.28.4
	I1221 18:24:41.248507  103175 api_server.go:131] duration metric: took 4.917753ms to wait for apiserver health ...
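
With the process up, health is confirmed by GET /healthz (which returns the literal body "ok") followed by GET /version, whose JSON is echoed above. A sketch of both probes; certificate verification is skipped here for brevity, whereas the real client presents the cluster's client certificates (kubeadm-style clusters typically expose /healthz and /version to unauthenticated callers):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Demo only: skip TLS verification instead of loading cluster certs.
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := c.Get("https://192.168.58.2:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
        }
    }
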
	I1221 18:24:41.248514  103175 system_pods.go:43] waiting for kube-system pods to appear ...
	I1221 18:24:41.430895  103175 request.go:629] Waited for 182.316217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:24:41.430941  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:24:41.430946  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:41.430955  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:41.430964  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:41.434007  103175 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1221 18:24:41.434033  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:41.434044  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:41.434052  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:41.434063  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:41.434075  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:41 GMT
	I1221 18:24:41.434087  103175 round_trippers.go:580]     Audit-Id: cf4b84b8-bc29-4cdb-9aa7-8543d7f5d19f
	I1221 18:24:41.434096  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:41.434563  103175 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"409","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1221 18:24:41.436215  103175 system_pods.go:59] 8 kube-system pods found
	I1221 18:24:41.436259  103175 system_pods.go:61] "coredns-5dd5756b68-rzjlp" [49af9dec-b485-4ae7-b65f-f9ae56b041de] Running
	I1221 18:24:41.436269  103175 system_pods.go:61] "etcd-multinode-186629" [050d0edc-924f-43b4-ae37-c41be4b23abe] Running
	I1221 18:24:41.436273  103175 system_pods.go:61] "kindnet-w2nh9" [731e5a37-9d18-4cee-b269-127e4ad9c8cf] Running
	I1221 18:24:41.436278  103175 system_pods.go:61] "kube-apiserver-multinode-186629" [494ef2df-db06-45ea-89d9-d277b1915b9b] Running
	I1221 18:24:41.436284  103175 system_pods.go:61] "kube-controller-manager-multinode-186629" [327f27d3-7657-4072-a08d-b5ee04f8c570] Running
	I1221 18:24:41.436291  103175 system_pods.go:61] "kube-proxy-sq9cp" [74302016-3be7-43b4-9909-8a256ce497b6] Running
	I1221 18:24:41.436295  103175 system_pods.go:61] "kube-scheduler-multinode-186629" [71e349f9-0a8d-43da-918d-917bbe11b7b1] Running
	I1221 18:24:41.436301  103175 system_pods.go:61] "storage-provisioner" [e410c9c3-aca6-4eb6-9186-d00fa92f6cb0] Running
	I1221 18:24:41.436308  103175 system_pods.go:74] duration metric: took 187.78931ms to wait for pod list to return data ...
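
The "8 kube-system pods found" summary is a single PodList request plus a per-item phase check, not eight separate GETs. A library-style sketch under the same placeholder package as the earlier snippet:

    package kubewait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning issues one LIST of kube-system and checks each
    // pod's phase, as the system_pods step above does.
    func allSystemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }
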
	I1221 18:24:41.436317  103175 default_sa.go:34] waiting for default service account to be created ...
	I1221 18:24:41.630731  103175 request.go:629] Waited for 194.348562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1221 18:24:41.630788  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1221 18:24:41.630793  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:41.630800  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:41.630817  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:41.632884  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:41.632909  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:41.632915  103175 round_trippers.go:580]     Audit-Id: 0807569c-5b78-4714-9428-bf00955de2d8
	I1221 18:24:41.632921  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:41.632926  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:41.632931  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:41.632936  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:41.632942  103175 round_trippers.go:580]     Content-Length: 261
	I1221 18:24:41.632959  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:41 GMT
	I1221 18:24:41.632981  103175 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"47e48c2e-77c3-4cae-8289-8473f4b276d0","resourceVersion":"308","creationTimestamp":"2023-12-21T18:24:07Z"}}]}
	I1221 18:24:41.633165  103175 default_sa.go:45] found service account: "default"
	I1221 18:24:41.633179  103175 default_sa.go:55] duration metric: took 196.85556ms for default service account to be created ...
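
The default service-account wait exists because pod creation in a namespace fails until kube-controller-manager has minted its "default" ServiceAccount; the check itself is a single GET. A one-call sketch:

    package kubewait

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists reports whether the "default" ServiceAccount has been
    // created yet; the wait above polls until this returns true.
    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) bool {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        return err == nil
    }
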
	I1221 18:24:41.633187  103175 system_pods.go:116] waiting for k8s-apps to be running ...
	I1221 18:24:41.830599  103175 request.go:629] Waited for 197.360145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:24:41.830678  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:24:41.830689  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:41.830697  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:41.830703  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:41.835262  103175 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1221 18:24:41.835288  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:41.835299  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:41.835307  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:41.835313  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:41 GMT
	I1221 18:24:41.835319  103175 round_trippers.go:580]     Audit-Id: c0263c3a-3610-4de8-ae75-a44dc30c5540
	I1221 18:24:41.835324  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:41.835330  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:41.835701  103175 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"409","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1221 18:24:41.837353  103175 system_pods.go:86] 8 kube-system pods found
	I1221 18:24:41.837371  103175 system_pods.go:89] "coredns-5dd5756b68-rzjlp" [49af9dec-b485-4ae7-b65f-f9ae56b041de] Running
	I1221 18:24:41.837376  103175 system_pods.go:89] "etcd-multinode-186629" [050d0edc-924f-43b4-ae37-c41be4b23abe] Running
	I1221 18:24:41.837381  103175 system_pods.go:89] "kindnet-w2nh9" [731e5a37-9d18-4cee-b269-127e4ad9c8cf] Running
	I1221 18:24:41.837385  103175 system_pods.go:89] "kube-apiserver-multinode-186629" [494ef2df-db06-45ea-89d9-d277b1915b9b] Running
	I1221 18:24:41.837389  103175 system_pods.go:89] "kube-controller-manager-multinode-186629" [327f27d3-7657-4072-a08d-b5ee04f8c570] Running
	I1221 18:24:41.837393  103175 system_pods.go:89] "kube-proxy-sq9cp" [74302016-3be7-43b4-9909-8a256ce497b6] Running
	I1221 18:24:41.837397  103175 system_pods.go:89] "kube-scheduler-multinode-186629" [71e349f9-0a8d-43da-918d-917bbe11b7b1] Running
	I1221 18:24:41.837402  103175 system_pods.go:89] "storage-provisioner" [e410c9c3-aca6-4eb6-9186-d00fa92f6cb0] Running
	I1221 18:24:41.837412  103175 system_pods.go:126] duration metric: took 204.220527ms to wait for k8s-apps to be running ...
	I1221 18:24:41.837419  103175 system_svc.go:44] waiting for kubelet service to be running ...
	I1221 18:24:41.837461  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:24:41.847745  103175 system_svc.go:56] duration metric: took 10.318088ms (WaitForService) to wait for the kubelet service.
	I1221 18:24:41.847771  103175 kubeadm.go:581] duration metric: took 33.360456582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
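
The kubelet probe leans on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so there is no output to parse. A local sketch of the same check (minikube runs it over SSH inside the node container, with the exact argument list shown in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; nothing is printed by systemctl.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
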
	I1221 18:24:41.847795  103175 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:24:42.031201  103175 request.go:629] Waited for 183.335557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1221 18:24:42.031252  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1221 18:24:42.031269  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:42.031283  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:42.031293  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:42.033126  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:42.033144  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:42.033150  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:42.033155  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:42.033164  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:42.033172  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:42 GMT
	I1221 18:24:42.033192  103175 round_trippers.go:580]     Audit-Id: f08b78ea-b7d7-46b0-9671-8f0b433cc578
	I1221 18:24:42.033202  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:42.033354  103175 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1221 18:24:42.033722  103175 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 18:24:42.033741  103175 node_conditions.go:123] node cpu capacity is 8
	I1221 18:24:42.033752  103175 node_conditions.go:105] duration metric: took 185.948847ms to run NodePressure ...
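
The NodePressure step reads capacity straight off the NodeList above: 304681132Ki of ephemeral storage and 8 CPUs on this runner. A sketch of extracting those two quantities with client-go:

    package kubewait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists nodes and reads the cpu and ephemeral-storage
    // capacity, the two values logged by the node_conditions step.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
        }
        return nil
    }
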
	I1221 18:24:42.033762  103175 start.go:228] waiting for startup goroutines ...
	I1221 18:24:42.033773  103175 start.go:233] waiting for cluster config update ...
	I1221 18:24:42.033782  103175 start.go:242] writing updated cluster config ...
	I1221 18:24:42.035808  103175 out.go:177] 
	I1221 18:24:42.037468  103175 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:24:42.037530  103175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/config.json ...
	I1221 18:24:42.039310  103175 out.go:177] * Starting worker node multinode-186629-m02 in cluster multinode-186629
	I1221 18:24:42.041003  103175 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:24:42.042313  103175 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:24:42.043552  103175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:24:42.043565  103175 cache.go:56] Caching tarball of preloaded images
	I1221 18:24:42.043629  103175 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:24:42.043659  103175 preload.go:174] Found /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1221 18:24:42.043673  103175 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I1221 18:24:42.043773  103175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/config.json ...
	I1221 18:24:42.058647  103175 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:24:42.058666  103175 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1221 18:24:42.058676  103175 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:24:42.058701  103175 start.go:365] acquiring machines lock for multinode-186629-m02: {Name:mkca33a8dd351a26ac40adc4719945b7bfc2fcb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:24:42.058823  103175 start.go:369] acquired machines lock for "multinode-186629-m02" in 97.835µs
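
The machines lock serializes concurrent provisioning within a profile; here it is acquired in 97.835µs because nothing else holds it, and the Delay:500ms/Timeout:10m0s parameters shown govern retries when something does. A much-simplified file-based sketch of the idea (minikube's real implementation is a named mutex, not this):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
        "time"
    )

    // acquire takes an exclusive lock by creating a lock file, retrying every
    // 500ms (the Delay above) until the timeout expires.
    func acquire(path string, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, fs.ErrExist) || time.Now().After(deadline) {
                return nil, err
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines-demo.lock", 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; provisioning would run here")
    }
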
	I1221 18:24:42.058850  103175 start.go:93] Provisioning new machine with config: &{Name:multinode-186629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
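Editor's note: the piece of the config dump above that drives the rest of this transcript is the Nodes slice: an existing control-plane node at 192.168.58.2 and a new worker "m02" with no IP or port yet. A minimal Go sketch for orientation only (field names abbreviated from the dump; these are not minikube's actual type definitions):

	package main

	import "fmt"

	// Node mirrors the per-node entries visible in the dump above
	// (a reading aid, not minikube's real config type).
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}

	func main() {
		nodes := []Node{
			{IP: "192.168.58.2", Port: 8443, KubernetesVersion: "v1.28.4", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
			// IP and Port for m02 are assigned during createHost below.
			{Name: "m02", KubernetesVersion: "v1.28.4", ContainerRuntime: "crio", Worker: true},
		}
		for _, n := range nodes {
			fmt.Printf("%+v\n", n)
		}
	}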
	I1221 18:24:42.058929  103175 start.go:125] createHost starting for "m02" (driver="docker")
	I1221 18:24:42.060756  103175 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1221 18:24:42.060835  103175 start.go:159] libmachine.API.Create for "multinode-186629" (driver="docker")
	I1221 18:24:42.060859  103175 client.go:168] LocalClient.Create starting
	I1221 18:24:42.060934  103175 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem
	I1221 18:24:42.060960  103175 main.go:141] libmachine: Decoding PEM data...
	I1221 18:24:42.060973  103175 main.go:141] libmachine: Parsing certificate...
	I1221 18:24:42.061020  103175 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem
	I1221 18:24:42.061049  103175 main.go:141] libmachine: Decoding PEM data...
	I1221 18:24:42.061059  103175 main.go:141] libmachine: Parsing certificate...
	I1221 18:24:42.061246  103175 cli_runner.go:164] Run: docker network inspect multinode-186629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:24:42.075655  103175 network_create.go:77] Found existing network {name:multinode-186629 subnet:0xc002e15650 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1221 18:24:42.075685  103175 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-186629-m02" container
	I1221 18:24:42.075738  103175 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1221 18:24:42.090512  103175 cli_runner.go:164] Run: docker volume create multinode-186629-m02 --label name.minikube.sigs.k8s.io=multinode-186629-m02 --label created_by.minikube.sigs.k8s.io=true
	I1221 18:24:42.105646  103175 oci.go:103] Successfully created a docker volume multinode-186629-m02
	I1221 18:24:42.105714  103175 cli_runner.go:164] Run: docker run --rm --name multinode-186629-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-186629-m02 --entrypoint /usr/bin/test -v multinode-186629-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1221 18:24:42.607868  103175 oci.go:107] Successfully prepared a docker volume multinode-186629-m02
	I1221 18:24:42.607922  103175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:24:42.607941  103175 kic.go:194] Starting extracting preloaded images to volume ...
	I1221 18:24:42.607994  103175 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-186629-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1221 18:24:47.631511  103175 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-186629-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.023479659s)
	I1221 18:24:47.631541  103175 kic.go:203] duration metric: took 5.023597 seconds to extract preloaded images to volume
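Editor's note: the ~5s step above extracts the lz4 preload tarball into the node's docker volume using a throwaway container whose entrypoint is tar. The image digest, tarball path, and volume name below are copied verbatim from the logged command; the os/exec wrapper is only a sketch of the same invocation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Values copied from the log above.
		tarball := "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
		volume := "multinode-186629-m02"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0"

		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("extract failed:", err)
		}
	}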
	W1221 18:24:47.631646  103175 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1221 18:24:47.631726  103175 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1221 18:24:47.679432  103175 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-186629-m02 --name multinode-186629-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-186629-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-186629-m02 --network multinode-186629 --ip 192.168.58.3 --volume multinode-186629-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:24:47.962313  103175 cli_runner.go:164] Run: docker container inspect multinode-186629-m02 --format={{.State.Running}}
	I1221 18:24:47.979466  103175 cli_runner.go:164] Run: docker container inspect multinode-186629-m02 --format={{.State.Status}}
	I1221 18:24:47.995841  103175 cli_runner.go:164] Run: docker exec multinode-186629-m02 stat /var/lib/dpkg/alternatives/iptables
	I1221 18:24:48.062439  103175 oci.go:144] the created container "multinode-186629-m02" has a running status.
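Editor's note: the two `docker container inspect --format {{.State.Running}}` / `{{.State.Status}}` calls above verify the freshly started node container before anything is executed inside it. A small polling sketch of that check (the helper name and timeout are ours):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls docker until the container reports State.Running=true,
	// mirroring the inspect calls in the log above.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				name, "--format", "{{.State.Running}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "true" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %s not running after %s", name, timeout)
	}

	func main() {
		if err := waitRunning("multinode-186629-m02", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}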
	I1221 18:24:48.062478  103175 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa...
	I1221 18:24:48.277128  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1221 18:24:48.277170  103175 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1221 18:24:48.300044  103175 cli_runner.go:164] Run: docker container inspect multinode-186629-m02 --format={{.State.Status}}
	I1221 18:24:48.318005  103175 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1221 18:24:48.318030  103175 kic_runner.go:114] Args: [docker exec --privileged multinode-186629-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
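Editor's note: "Creating ssh key for kic" above writes an id_rsa/id_rsa.pub pair under .minikube/machines/<node>/ and installs the public half as the container's authorized_keys (the 381-byte copy and chown in the log). A minimal sketch of generating such a pair in Go; the RSA-2048 choice here is an assumption for illustration, not taken from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		// Error handling elided for brevity.
		os.WriteFile("id_rsa", privPEM, 0600)
		os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644)
		fmt.Println("wrote id_rsa and id_rsa.pub")
	}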
	I1221 18:24:48.400092  103175 cli_runner.go:164] Run: docker container inspect multinode-186629-m02 --format={{.State.Status}}
	I1221 18:24:48.416406  103175 machine.go:88] provisioning docker machine ...
	I1221 18:24:48.416436  103175 ubuntu.go:169] provisioning hostname "multinode-186629-m02"
	I1221 18:24:48.416484  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:48.434580  103175 main.go:141] libmachine: Using SSH client type: native
	I1221 18:24:48.434982  103175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1221 18:24:48.434995  103175 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-186629-m02 && echo "multinode-186629-m02" | sudo tee /etc/hostname
	I1221 18:24:48.596326  103175 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-186629-m02
	
	I1221 18:24:48.596422  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:48.616237  103175 main.go:141] libmachine: Using SSH client type: native
	I1221 18:24:48.616796  103175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1221 18:24:48.616827  103175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-186629-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-186629-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-186629-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:24:48.732638  103175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
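Editor's note: the SSH command above pins the node's hostname in /etc/hosts, rewriting an existing 127.0.1.1 entry or appending one. The shell body below is copied from the log; templating it per hostname with a helper function is our illustration only:

	package main

	import "fmt"

	// hostsFixCmd reproduces the /etc/hosts patch shown above for a
	// given hostname (helper name is ours; shell text is from the log).
	func hostsFixCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixCmd("multinode-186629-m02"))
	}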
	I1221 18:24:48.732669  103175 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-9881/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-9881/.minikube}
	I1221 18:24:48.732687  103175 ubuntu.go:177] setting up certificates
	I1221 18:24:48.732702  103175 provision.go:83] configureAuth start
	I1221 18:24:48.732757  103175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629-m02
	I1221 18:24:48.748696  103175 provision.go:138] copyHostCerts
	I1221 18:24:48.748726  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:24:48.748751  103175 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem, removing ...
	I1221 18:24:48.748760  103175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:24:48.748814  103175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem (1078 bytes)
	I1221 18:24:48.748881  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:24:48.748899  103175 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem, removing ...
	I1221 18:24:48.748903  103175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:24:48.748927  103175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem (1123 bytes)
	I1221 18:24:48.748981  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:24:48.749001  103175 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem, removing ...
	I1221 18:24:48.749007  103175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:24:48.749027  103175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem (1679 bytes)
	I1221 18:24:48.749081  103175 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem org=jenkins.multinode-186629-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-186629-m02]
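Editor's note: the server cert generated above carries the SAN list from the log line (192.168.58.3, 127.0.0.1, localhost, minikube, the node name) and org jenkins.multinode-186629-m02. A compact crypto/x509 sketch with the same SANs; minikube signs with its CA key, whereas this sketch self-signs to stay short:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-186629-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-186629-m02"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}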
	I1221 18:24:48.846622  103175 provision.go:172] copyRemoteCerts
	I1221 18:24:48.846691  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:24:48.846733  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:48.862592  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa Username:docker}
	I1221 18:24:48.945527  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1221 18:24:48.945585  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 18:24:48.966099  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1221 18:24:48.966181  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 18:24:48.985697  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1221 18:24:48.985742  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1221 18:24:49.005372  103175 provision.go:86] duration metric: configureAuth took 272.656157ms
	I1221 18:24:49.005408  103175 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:24:49.005565  103175 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:24:49.005680  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:49.021746  103175 main.go:141] libmachine: Using SSH client type: native
	I1221 18:24:49.022083  103175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32854 <nil> <nil>}
	I1221 18:24:49.022101  103175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 18:24:49.212718  103175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 18:24:49.212739  103175 machine.go:91] provisioned docker machine in 796.314595ms
	I1221 18:24:49.212748  103175 client.go:171] LocalClient.Create took 7.151883352s
	I1221 18:24:49.212761  103175 start.go:167] duration metric: libmachine.API.Create for "multinode-186629" took 7.151925413s
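Editor's note: the "%!s(MISSING)" in the printf command a few lines up is not corruption in this report. The shell text containing a literal printf verb was routed through a Go fmt-style logger without a matching argument, so fmt flags the orphan verb; the "%!p(MISSING)" in the find command and the "26%!" / "(MISSING)" split later in this log appear to be the same mechanism (a trailing % before a newline). A two-line demonstration:

	package main

	import "fmt"

	func main() {
		// A literal %s with no argument is echoed back as %!s(MISSING),
		// exactly the artifact visible in the logged SSH commands.
		format := "sudo mkdir -p /etc/sysconfig && printf %s"
		fmt.Println(fmt.Sprintf(format))
	}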
	I1221 18:24:49.212770  103175 start.go:300] post-start starting for "multinode-186629-m02" (driver="docker")
	I1221 18:24:49.212781  103175 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:24:49.212849  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:24:49.212890  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:49.228911  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa Username:docker}
	I1221 18:24:49.312847  103175 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:24:49.315331  103175 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1221 18:24:49.315352  103175 command_runner.go:130] > NAME="Ubuntu"
	I1221 18:24:49.315361  103175 command_runner.go:130] > VERSION_ID="22.04"
	I1221 18:24:49.315370  103175 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1221 18:24:49.315381  103175 command_runner.go:130] > VERSION_CODENAME=jammy
	I1221 18:24:49.315391  103175 command_runner.go:130] > ID=ubuntu
	I1221 18:24:49.315398  103175 command_runner.go:130] > ID_LIKE=debian
	I1221 18:24:49.315404  103175 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1221 18:24:49.315409  103175 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1221 18:24:49.315420  103175 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1221 18:24:49.315428  103175 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1221 18:24:49.315434  103175 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1221 18:24:49.315484  103175 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:24:49.315506  103175 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:24:49.315515  103175 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:24:49.315524  103175 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1221 18:24:49.315535  103175 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/addons for local assets ...
	I1221 18:24:49.315577  103175 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/files for local assets ...
	I1221 18:24:49.315643  103175 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> 166642.pem in /etc/ssl/certs
	I1221 18:24:49.315651  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> /etc/ssl/certs/166642.pem
	I1221 18:24:49.315723  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:24:49.322939  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:24:49.343044  103175 start.go:303] post-start completed in 130.26138ms
	I1221 18:24:49.343354  103175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629-m02
	I1221 18:24:49.359243  103175 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/config.json ...
	I1221 18:24:49.359457  103175 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:24:49.359493  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:49.374414  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa Username:docker}
	I1221 18:24:49.453562  103175 command_runner.go:130] > 26%!
	(MISSING)I1221 18:24:49.453641  103175 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:24:49.457434  103175 command_runner.go:130] > 217G
	I1221 18:24:49.457464  103175 start.go:128] duration metric: createHost completed in 7.398524945s
	I1221 18:24:49.457476  103175 start.go:83] releasing machines lock for "multinode-186629-m02", held for 7.398638494s
	I1221 18:24:49.457545  103175 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629-m02
	I1221 18:24:49.476434  103175 out.go:177] * Found network options:
	I1221 18:24:49.477854  103175 out.go:177]   - NO_PROXY=192.168.58.2
	W1221 18:24:49.479149  103175 proxy.go:119] fail to check proxy env: Error ip not in block
	W1221 18:24:49.479179  103175 proxy.go:119] fail to check proxy env: Error ip not in block
	I1221 18:24:49.479235  103175 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 18:24:49.479269  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:49.479341  103175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:24:49.479403  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:24:49.494854  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa Username:docker}
	I1221 18:24:49.495179  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa Username:docker}
	I1221 18:24:49.657448  103175 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1221 18:24:49.709253  103175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:24:49.713099  103175 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1221 18:24:49.713122  103175 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1221 18:24:49.713133  103175 command_runner.go:130] > Device: b0h/176d	Inode: 577309      Links: 1
	I1221 18:24:49.713144  103175 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1221 18:24:49.713158  103175 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1221 18:24:49.713166  103175 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1221 18:24:49.713178  103175 command_runner.go:130] > Change: 2023-12-21 18:04:50.542282371 +0000
	I1221 18:24:49.713190  103175 command_runner.go:130] >  Birth: 2023-12-21 18:04:50.542282371 +0000
	I1221 18:24:49.713274  103175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:24:49.729269  103175 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:24:49.729328  103175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:24:49.753211  103175 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1221 18:24:49.753332  103175 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
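Editor's note: the two find/mv passes above disable the preinstalled loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix so CRI-O ignores them (the intended find flag behind "%!p(MISSING)" is presumably -printf "%p, "). A glob-and-rename sketch of the same idea:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{
			"/etc/cni/net.d/*loopback.conf*",
			"/etc/cni/net.d/*bridge*",
			"/etc/cni/net.d/*podman*",
		} {
			matches, _ := filepath.Glob(pattern)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("skip:", err)
					continue
				}
				fmt.Println("disabled", m)
			}
		}
	}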
	I1221 18:24:49.753348  103175 start.go:475] detecting cgroup driver to use...
	I1221 18:24:49.753379  103175 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:24:49.753421  103175 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 18:24:49.765569  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 18:24:49.774343  103175 docker.go:203] disabling cri-docker service (if available) ...
	I1221 18:24:49.774391  103175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 18:24:49.785392  103175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 18:24:49.796863  103175 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1221 18:24:49.864170  103175 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 18:24:49.875982  103175 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1221 18:24:49.939994  103175 docker.go:219] disabling docker service ...
	I1221 18:24:49.940045  103175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 18:24:49.955485  103175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 18:24:49.964784  103175 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 18:24:50.044219  103175 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1221 18:24:50.044297  103175 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 18:24:50.125106  103175 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1221 18:24:50.125171  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 18:24:50.134580  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:24:50.147381  103175 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1221 18:24:50.148159  103175 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1221 18:24:50.148218  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:24:50.156079  103175 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1221 18:24:50.156137  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:24:50.163908  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:24:50.171627  103175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
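Editor's note: after the sed edits above, the affected lines of /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (the last two values are confirmed by the crio config dump later in this log; grouping them into one excerpt is ours):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"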
	I1221 18:24:50.179436  103175 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1221 18:24:50.186541  103175 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1221 18:24:50.192660  103175 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1221 18:24:50.193347  103175 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1221 18:24:50.200206  103175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1221 18:24:50.263618  103175 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1221 18:24:50.355881  103175 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1221 18:24:50.355953  103175 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1221 18:24:50.359039  103175 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1221 18:24:50.359056  103175 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1221 18:24:50.359063  103175 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1221 18:24:50.359070  103175 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1221 18:24:50.359075  103175 command_runner.go:130] > Access: 2023-12-21 18:24:50.343938677 +0000
	I1221 18:24:50.359081  103175 command_runner.go:130] > Modify: 2023-12-21 18:24:50.343938677 +0000
	I1221 18:24:50.359086  103175 command_runner.go:130] > Change: 2023-12-21 18:24:50.343938677 +0000
	I1221 18:24:50.359090  103175 command_runner.go:130] >  Birth: -
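Editor's note: "Will wait 60s for socket path" above is satisfied here by a stat of /var/run/crio/crio.sock. A sketch of an equivalent readiness wait in Go; dialing the socket, as below, is a slightly stricter check than the stat the log actually performs:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSocket dials the CRI-O unix socket until it answers or the
	// deadline passes, mirroring the 60s wait announced in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if c, err := net.DialTimeout("unix", path, time.Second); err == nil {
				c.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s not ready after %s", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}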
	I1221 18:24:50.359125  103175 start.go:543] Will wait 60s for crictl version
	I1221 18:24:50.359159  103175 ssh_runner.go:195] Run: which crictl
	I1221 18:24:50.361757  103175 command_runner.go:130] > /usr/bin/crictl
	I1221 18:24:50.361884  103175 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1221 18:24:50.390991  103175 command_runner.go:130] > Version:  0.1.0
	I1221 18:24:50.391009  103175 command_runner.go:130] > RuntimeName:  cri-o
	I1221 18:24:50.391014  103175 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1221 18:24:50.391019  103175 command_runner.go:130] > RuntimeApiVersion:  v1
	I1221 18:24:50.391033  103175 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1221 18:24:50.391075  103175 ssh_runner.go:195] Run: crio --version
	I1221 18:24:50.420627  103175 command_runner.go:130] > crio version 1.24.6
	I1221 18:24:50.420650  103175 command_runner.go:130] > Version:          1.24.6
	I1221 18:24:50.420661  103175 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1221 18:24:50.420666  103175 command_runner.go:130] > GitTreeState:     clean
	I1221 18:24:50.420676  103175 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1221 18:24:50.420684  103175 command_runner.go:130] > GoVersion:        go1.18.2
	I1221 18:24:50.420692  103175 command_runner.go:130] > Compiler:         gc
	I1221 18:24:50.420701  103175 command_runner.go:130] > Platform:         linux/amd64
	I1221 18:24:50.420714  103175 command_runner.go:130] > Linkmode:         dynamic
	I1221 18:24:50.420728  103175 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1221 18:24:50.420740  103175 command_runner.go:130] > SeccompEnabled:   true
	I1221 18:24:50.420750  103175 command_runner.go:130] > AppArmorEnabled:  false
	I1221 18:24:50.422044  103175 ssh_runner.go:195] Run: crio --version
	I1221 18:24:50.452178  103175 command_runner.go:130] > crio version 1.24.6
	I1221 18:24:50.452197  103175 command_runner.go:130] > Version:          1.24.6
	I1221 18:24:50.452204  103175 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1221 18:24:50.452209  103175 command_runner.go:130] > GitTreeState:     clean
	I1221 18:24:50.452215  103175 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1221 18:24:50.452220  103175 command_runner.go:130] > GoVersion:        go1.18.2
	I1221 18:24:50.452224  103175 command_runner.go:130] > Compiler:         gc
	I1221 18:24:50.452229  103175 command_runner.go:130] > Platform:         linux/amd64
	I1221 18:24:50.452235  103175 command_runner.go:130] > Linkmode:         dynamic
	I1221 18:24:50.452242  103175 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1221 18:24:50.452249  103175 command_runner.go:130] > SeccompEnabled:   true
	I1221 18:24:50.452254  103175 command_runner.go:130] > AppArmorEnabled:  false
	I1221 18:24:50.454084  103175 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1221 18:24:50.455450  103175 out.go:177]   - env NO_PROXY=192.168.58.2
	I1221 18:24:50.456715  103175 cli_runner.go:164] Run: docker network inspect multinode-186629 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1221 18:24:50.471722  103175 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1221 18:24:50.474830  103175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:24:50.483976  103175 certs.go:56] Setting up /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629 for IP: 192.168.58.3
	I1221 18:24:50.484005  103175 certs.go:190] acquiring lock for shared ca certs: {Name:mk1a19dbb52a881fd398c5196f3505713dce7712 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:24:50.484158  103175 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key
	I1221 18:24:50.484207  103175 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key
	I1221 18:24:50.484223  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1221 18:24:50.484242  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1221 18:24:50.484259  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1221 18:24:50.484274  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1221 18:24:50.484335  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem (1338 bytes)
	W1221 18:24:50.484373  103175 certs.go:433] ignoring /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664_empty.pem, impossibly tiny 0 bytes
	I1221 18:24:50.484391  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem (1679 bytes)
	I1221 18:24:50.484426  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem (1078 bytes)
	I1221 18:24:50.484457  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem (1123 bytes)
	I1221 18:24:50.484491  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem (1679 bytes)
	I1221 18:24:50.484546  103175 certs.go:437] found cert: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:24:50.484583  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> /usr/share/ca-certificates/166642.pem
	I1221 18:24:50.484610  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:24:50.484695  103175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem -> /usr/share/ca-certificates/16664.pem
	I1221 18:24:50.485141  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1221 18:24:50.504866  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1221 18:24:50.524528  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1221 18:24:50.543519  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1221 18:24:50.562526  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /usr/share/ca-certificates/166642.pem (1708 bytes)
	I1221 18:24:50.581510  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1221 18:24:50.600855  103175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/16664.pem --> /usr/share/ca-certificates/16664.pem (1338 bytes)
	I1221 18:24:50.620121  103175 ssh_runner.go:195] Run: openssl version
	I1221 18:24:50.624411  103175 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1221 18:24:50.624614  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166642.pem && ln -fs /usr/share/ca-certificates/166642.pem /etc/ssl/certs/166642.pem"
	I1221 18:24:50.632030  103175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166642.pem
	I1221 18:24:50.634814  103175 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 21 18:11 /usr/share/ca-certificates/166642.pem
	I1221 18:24:50.634845  103175 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 21 18:11 /usr/share/ca-certificates/166642.pem
	I1221 18:24:50.634881  103175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166642.pem
	I1221 18:24:50.640353  103175 command_runner.go:130] > 3ec20f2e
	I1221 18:24:50.640576  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166642.pem /etc/ssl/certs/3ec20f2e.0"
	I1221 18:24:50.647897  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1221 18:24:50.655350  103175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:24:50.658045  103175 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 21 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:24:50.658082  103175 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 21 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:24:50.658117  103175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1221 18:24:50.663563  103175 command_runner.go:130] > b5213941
	I1221 18:24:50.663790  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1221 18:24:50.671199  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16664.pem && ln -fs /usr/share/ca-certificates/16664.pem /etc/ssl/certs/16664.pem"
	I1221 18:24:50.678693  103175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16664.pem
	I1221 18:24:50.681521  103175 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 21 18:11 /usr/share/ca-certificates/16664.pem
	I1221 18:24:50.681548  103175 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 21 18:11 /usr/share/ca-certificates/16664.pem
	I1221 18:24:50.681582  103175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16664.pem
	I1221 18:24:50.686994  103175 command_runner.go:130] > 51391683
	I1221 18:24:50.687216  103175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16664.pem /etc/ssl/certs/51391683.0"
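Editor's note: each CA installed above is made discoverable through OpenSSL's hashed-directory convention: the subject hash printed by `openssl x509 -hash -noout` (3ec20f2e, b5213941, 51391683 in the log) names a <hash>.0 symlink in /etc/ssl/certs. A sketch of that step; the helper name is ours:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		// Replace any stale link, as the `test -L || ln -fs` in the log does.
		os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
	}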
	I1221 18:24:50.695069  103175 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1221 18:24:50.697836  103175 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:24:50.697861  103175 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1221 18:24:50.697933  103175 ssh_runner.go:195] Run: crio config
	I1221 18:24:50.730077  103175 command_runner.go:130] ! time="2023-12-21 18:24:50.729808219Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1221 18:24:50.730106  103175 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1221 18:24:50.734842  103175 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1221 18:24:50.734867  103175 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1221 18:24:50.734878  103175 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1221 18:24:50.734883  103175 command_runner.go:130] > #
	I1221 18:24:50.734890  103175 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1221 18:24:50.734899  103175 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1221 18:24:50.734905  103175 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1221 18:24:50.734916  103175 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1221 18:24:50.734920  103175 command_runner.go:130] > # reload'.
	I1221 18:24:50.734926  103175 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1221 18:24:50.734934  103175 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1221 18:24:50.734941  103175 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1221 18:24:50.734954  103175 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1221 18:24:50.734957  103175 command_runner.go:130] > [crio]
	I1221 18:24:50.734963  103175 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1221 18:24:50.734970  103175 command_runner.go:130] > # containers images, in this directory.
	I1221 18:24:50.734978  103175 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1221 18:24:50.734987  103175 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1221 18:24:50.734992  103175 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1221 18:24:50.735000  103175 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1221 18:24:50.735008  103175 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1221 18:24:50.735016  103175 command_runner.go:130] > # storage_driver = "vfs"
	I1221 18:24:50.735024  103175 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1221 18:24:50.735032  103175 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1221 18:24:50.735039  103175 command_runner.go:130] > # storage_option = [
	I1221 18:24:50.735043  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735051  103175 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1221 18:24:50.735061  103175 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1221 18:24:50.735069  103175 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1221 18:24:50.735077  103175 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1221 18:24:50.735084  103175 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1221 18:24:50.735090  103175 command_runner.go:130] > # always happen on a node reboot
	I1221 18:24:50.735100  103175 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1221 18:24:50.735108  103175 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1221 18:24:50.735116  103175 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1221 18:24:50.735127  103175 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1221 18:24:50.735134  103175 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1221 18:24:50.735145  103175 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1221 18:24:50.735154  103175 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1221 18:24:50.735161  103175 command_runner.go:130] > # internal_wipe = true
	I1221 18:24:50.735167  103175 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1221 18:24:50.735179  103175 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1221 18:24:50.735191  103175 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1221 18:24:50.735203  103175 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1221 18:24:50.735211  103175 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1221 18:24:50.735217  103175 command_runner.go:130] > [crio.api]
	I1221 18:24:50.735223  103175 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1221 18:24:50.735230  103175 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1221 18:24:50.735236  103175 command_runner.go:130] > # IP address on which the stream server will listen.
	I1221 18:24:50.735243  103175 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1221 18:24:50.735249  103175 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1221 18:24:50.735256  103175 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1221 18:24:50.735261  103175 command_runner.go:130] > # stream_port = "0"
	I1221 18:24:50.735268  103175 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1221 18:24:50.735272  103175 command_runner.go:130] > # stream_enable_tls = false
	I1221 18:24:50.735280  103175 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1221 18:24:50.735285  103175 command_runner.go:130] > # stream_idle_timeout = ""
	I1221 18:24:50.735295  103175 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1221 18:24:50.735303  103175 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1221 18:24:50.735309  103175 command_runner.go:130] > # minutes.
	I1221 18:24:50.735313  103175 command_runner.go:130] > # stream_tls_cert = ""
	I1221 18:24:50.735321  103175 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1221 18:24:50.735329  103175 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1221 18:24:50.735335  103175 command_runner.go:130] > # stream_tls_key = ""
	I1221 18:24:50.735341  103175 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1221 18:24:50.735349  103175 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1221 18:24:50.735357  103175 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1221 18:24:50.735361  103175 command_runner.go:130] > # stream_tls_ca = ""
	I1221 18:24:50.735369  103175 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1221 18:24:50.735375  103175 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1221 18:24:50.735382  103175 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1221 18:24:50.735389  103175 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1221 18:24:50.735404  103175 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1221 18:24:50.735415  103175 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1221 18:24:50.735419  103175 command_runner.go:130] > [crio.runtime]
	I1221 18:24:50.735424  103175 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1221 18:24:50.735433  103175 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1221 18:24:50.735436  103175 command_runner.go:130] > # "nofile=1024:2048"
	I1221 18:24:50.735445  103175 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1221 18:24:50.735451  103175 command_runner.go:130] > # default_ulimits = [
	I1221 18:24:50.735455  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735463  103175 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1221 18:24:50.735469  103175 command_runner.go:130] > # no_pivot = false
	I1221 18:24:50.735475  103175 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1221 18:24:50.735483  103175 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1221 18:24:50.735490  103175 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1221 18:24:50.735498  103175 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1221 18:24:50.735505  103175 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1221 18:24:50.735512  103175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1221 18:24:50.735518  103175 command_runner.go:130] > # conmon = ""
	I1221 18:24:50.735522  103175 command_runner.go:130] > # Cgroup setting for conmon
	I1221 18:24:50.735529  103175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1221 18:24:50.735535  103175 command_runner.go:130] > conmon_cgroup = "pod"
	I1221 18:24:50.735542  103175 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1221 18:24:50.735550  103175 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1221 18:24:50.735559  103175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1221 18:24:50.735565  103175 command_runner.go:130] > # conmon_env = [
	I1221 18:24:50.735569  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735576  103175 command_runner.go:130] > # Additional environment variables to set for all the
	I1221 18:24:50.735584  103175 command_runner.go:130] > # containers. These are overridden if set in the
	I1221 18:24:50.735589  103175 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1221 18:24:50.735596  103175 command_runner.go:130] > # default_env = [
	I1221 18:24:50.735599  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735607  103175 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1221 18:24:50.735612  103175 command_runner.go:130] > # selinux = false
	I1221 18:24:50.735620  103175 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1221 18:24:50.735626  103175 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1221 18:24:50.735634  103175 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1221 18:24:50.735640  103175 command_runner.go:130] > # seccomp_profile = ""
	I1221 18:24:50.735646  103175 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1221 18:24:50.735653  103175 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1221 18:24:50.735662  103175 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1221 18:24:50.735668  103175 command_runner.go:130] > # which might increase security.
	I1221 18:24:50.735673  103175 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1221 18:24:50.735683  103175 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1221 18:24:50.735691  103175 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1221 18:24:50.735697  103175 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1221 18:24:50.735705  103175 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1221 18:24:50.735712  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:24:50.735716  103175 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1221 18:24:50.735724  103175 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1221 18:24:50.735731  103175 command_runner.go:130] > # the cgroup blockio controller.
	I1221 18:24:50.735735  103175 command_runner.go:130] > # blockio_config_file = ""
	I1221 18:24:50.735744  103175 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1221 18:24:50.735750  103175 command_runner.go:130] > # irqbalance daemon.
	I1221 18:24:50.735755  103175 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1221 18:24:50.735764  103175 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1221 18:24:50.735771  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:24:50.735776  103175 command_runner.go:130] > # rdt_config_file = ""
	I1221 18:24:50.735786  103175 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1221 18:24:50.735793  103175 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1221 18:24:50.735799  103175 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1221 18:24:50.735805  103175 command_runner.go:130] > # separate_pull_cgroup = ""
	I1221 18:24:50.735811  103175 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1221 18:24:50.735819  103175 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1221 18:24:50.735825  103175 command_runner.go:130] > # will be added.
	I1221 18:24:50.735830  103175 command_runner.go:130] > # default_capabilities = [
	I1221 18:24:50.735835  103175 command_runner.go:130] > # 	"CHOWN",
	I1221 18:24:50.735840  103175 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1221 18:24:50.735845  103175 command_runner.go:130] > # 	"FSETID",
	I1221 18:24:50.735849  103175 command_runner.go:130] > # 	"FOWNER",
	I1221 18:24:50.735856  103175 command_runner.go:130] > # 	"SETGID",
	I1221 18:24:50.735861  103175 command_runner.go:130] > # 	"SETUID",
	I1221 18:24:50.735865  103175 command_runner.go:130] > # 	"SETPCAP",
	I1221 18:24:50.735872  103175 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1221 18:24:50.735879  103175 command_runner.go:130] > # 	"KILL",
	I1221 18:24:50.735882  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735892  103175 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1221 18:24:50.735900  103175 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1221 18:24:50.735907  103175 command_runner.go:130] > # add_inheritable_capabilities = true
	I1221 18:24:50.735913  103175 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1221 18:24:50.735921  103175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1221 18:24:50.735928  103175 command_runner.go:130] > # default_sysctls = [
	I1221 18:24:50.735932  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735937  103175 command_runner.go:130] > # List of devices on the host that a
	I1221 18:24:50.735944  103175 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1221 18:24:50.735948  103175 command_runner.go:130] > # allowed_devices = [
	I1221 18:24:50.735952  103175 command_runner.go:130] > # 	"/dev/fuse",
	I1221 18:24:50.735956  103175 command_runner.go:130] > # ]
	I1221 18:24:50.735969  103175 command_runner.go:130] > # List of additional devices, specified as
	I1221 18:24:50.735989  103175 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1221 18:24:50.735997  103175 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1221 18:24:50.736005  103175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1221 18:24:50.736010  103175 command_runner.go:130] > # additional_devices = [
	I1221 18:24:50.736013  103175 command_runner.go:130] > # ]
	I1221 18:24:50.736021  103175 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1221 18:24:50.736026  103175 command_runner.go:130] > # cdi_spec_dirs = [
	I1221 18:24:50.736032  103175 command_runner.go:130] > # 	"/etc/cdi",
	I1221 18:24:50.736036  103175 command_runner.go:130] > # 	"/var/run/cdi",
	I1221 18:24:50.736042  103175 command_runner.go:130] > # ]
	I1221 18:24:50.736048  103175 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1221 18:24:50.736056  103175 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1221 18:24:50.736062  103175 command_runner.go:130] > # Defaults to false.
	I1221 18:24:50.736067  103175 command_runner.go:130] > # device_ownership_from_security_context = false
	I1221 18:24:50.736076  103175 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1221 18:24:50.736084  103175 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1221 18:24:50.736088  103175 command_runner.go:130] > # hooks_dir = [
	I1221 18:24:50.736092  103175 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1221 18:24:50.736103  103175 command_runner.go:130] > # ]
	I1221 18:24:50.736109  103175 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1221 18:24:50.736118  103175 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1221 18:24:50.736125  103175 command_runner.go:130] > # its default mounts from the following two files:
	I1221 18:24:50.736130  103175 command_runner.go:130] > #
	I1221 18:24:50.736136  103175 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1221 18:24:50.736145  103175 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1221 18:24:50.736153  103175 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1221 18:24:50.736158  103175 command_runner.go:130] > #
	I1221 18:24:50.736165  103175 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1221 18:24:50.736173  103175 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1221 18:24:50.736179  103175 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1221 18:24:50.736187  103175 command_runner.go:130] > #      only add mounts it finds in this file.
	I1221 18:24:50.736190  103175 command_runner.go:130] > #
	I1221 18:24:50.736197  103175 command_runner.go:130] > # default_mounts_file = ""
	I1221 18:24:50.736210  103175 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1221 18:24:50.736219  103175 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1221 18:24:50.736225  103175 command_runner.go:130] > # pids_limit = 0
	I1221 18:24:50.736232  103175 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1221 18:24:50.736240  103175 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1221 18:24:50.736248  103175 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1221 18:24:50.736258  103175 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1221 18:24:50.736264  103175 command_runner.go:130] > # log_size_max = -1
	I1221 18:24:50.736272  103175 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1221 18:24:50.736278  103175 command_runner.go:130] > # log_to_journald = false
	I1221 18:24:50.736284  103175 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1221 18:24:50.736292  103175 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1221 18:24:50.736297  103175 command_runner.go:130] > # Path to directory for container attach sockets.
	I1221 18:24:50.736305  103175 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1221 18:24:50.736313  103175 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1221 18:24:50.736319  103175 command_runner.go:130] > # bind_mount_prefix = ""
	I1221 18:24:50.736325  103175 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1221 18:24:50.736331  103175 command_runner.go:130] > # read_only = false
	I1221 18:24:50.736337  103175 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1221 18:24:50.736346  103175 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1221 18:24:50.736352  103175 command_runner.go:130] > # live configuration reload.
	I1221 18:24:50.736356  103175 command_runner.go:130] > # log_level = "info"
	I1221 18:24:50.736364  103175 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1221 18:24:50.736371  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:24:50.736377  103175 command_runner.go:130] > # log_filter = ""
	I1221 18:24:50.736383  103175 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1221 18:24:50.736391  103175 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1221 18:24:50.736397  103175 command_runner.go:130] > # separated by comma.
	I1221 18:24:50.736401  103175 command_runner.go:130] > # uid_mappings = ""
	I1221 18:24:50.736409  103175 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1221 18:24:50.736418  103175 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1221 18:24:50.736424  103175 command_runner.go:130] > # separated by comma.
	I1221 18:24:50.736428  103175 command_runner.go:130] > # gid_mappings = ""
	I1221 18:24:50.736436  103175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1221 18:24:50.736444  103175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1221 18:24:50.736453  103175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1221 18:24:50.736458  103175 command_runner.go:130] > # minimum_mappable_uid = -1
	I1221 18:24:50.736464  103175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1221 18:24:50.736473  103175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1221 18:24:50.736481  103175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1221 18:24:50.736487  103175 command_runner.go:130] > # minimum_mappable_gid = -1
	I1221 18:24:50.736493  103175 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1221 18:24:50.736501  103175 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1221 18:24:50.736509  103175 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1221 18:24:50.736515  103175 command_runner.go:130] > # ctr_stop_timeout = 30
	I1221 18:24:50.736523  103175 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1221 18:24:50.736530  103175 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1221 18:24:50.736537  103175 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1221 18:24:50.736542  103175 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1221 18:24:50.736548  103175 command_runner.go:130] > # drop_infra_ctr = true
	I1221 18:24:50.736555  103175 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1221 18:24:50.736563  103175 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1221 18:24:50.736572  103175 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1221 18:24:50.736578  103175 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1221 18:24:50.736584  103175 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1221 18:24:50.736591  103175 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1221 18:24:50.736595  103175 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1221 18:24:50.736604  103175 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1221 18:24:50.736610  103175 command_runner.go:130] > # pinns_path = ""
	I1221 18:24:50.736616  103175 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1221 18:24:50.736624  103175 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1221 18:24:50.736632  103175 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1221 18:24:50.736639  103175 command_runner.go:130] > # default_runtime = "runc"
	I1221 18:24:50.736644  103175 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1221 18:24:50.736654  103175 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1221 18:24:50.736664  103175 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1221 18:24:50.736672  103175 command_runner.go:130] > # creation as a file is not desired either.
	I1221 18:24:50.736682  103175 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1221 18:24:50.736689  103175 command_runner.go:130] > # the hostname is being managed dynamically.
	I1221 18:24:50.736694  103175 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1221 18:24:50.736699  103175 command_runner.go:130] > # ]
	I1221 18:24:50.736705  103175 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1221 18:24:50.736713  103175 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1221 18:24:50.736721  103175 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1221 18:24:50.736729  103175 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1221 18:24:50.736735  103175 command_runner.go:130] > #
	I1221 18:24:50.736740  103175 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1221 18:24:50.736747  103175 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1221 18:24:50.736751  103175 command_runner.go:130] > #  runtime_type = "oci"
	I1221 18:24:50.736758  103175 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1221 18:24:50.736763  103175 command_runner.go:130] > #  privileged_without_host_devices = false
	I1221 18:24:50.736770  103175 command_runner.go:130] > #  allowed_annotations = []
	I1221 18:24:50.736773  103175 command_runner.go:130] > # Where:
	I1221 18:24:50.736779  103175 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1221 18:24:50.736787  103175 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1221 18:24:50.736795  103175 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1221 18:24:50.736804  103175 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1221 18:24:50.736810  103175 command_runner.go:130] > #   in $PATH.
	I1221 18:24:50.736816  103175 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1221 18:24:50.736823  103175 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1221 18:24:50.736829  103175 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1221 18:24:50.736835  103175 command_runner.go:130] > #   state.
	I1221 18:24:50.736841  103175 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1221 18:24:50.736849  103175 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1221 18:24:50.736857  103175 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1221 18:24:50.736863  103175 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1221 18:24:50.736871  103175 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1221 18:24:50.736880  103175 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1221 18:24:50.736887  103175 command_runner.go:130] > #   The currently recognized values are:
	I1221 18:24:50.736893  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1221 18:24:50.736903  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1221 18:24:50.736911  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1221 18:24:50.736919  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1221 18:24:50.736927  103175 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1221 18:24:50.736935  103175 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1221 18:24:50.736941  103175 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1221 18:24:50.736948  103175 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1221 18:24:50.736955  103175 command_runner.go:130] > #   should be moved to the container's cgroup
	I1221 18:24:50.736959  103175 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1221 18:24:50.736966  103175 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1221 18:24:50.736970  103175 command_runner.go:130] > runtime_type = "oci"
	I1221 18:24:50.736975  103175 command_runner.go:130] > runtime_root = "/run/runc"
	I1221 18:24:50.736979  103175 command_runner.go:130] > runtime_config_path = ""
	I1221 18:24:50.736985  103175 command_runner.go:130] > monitor_path = ""
	I1221 18:24:50.736989  103175 command_runner.go:130] > monitor_cgroup = ""
	I1221 18:24:50.736996  103175 command_runner.go:130] > monitor_exec_cgroup = ""
	I1221 18:24:50.737040  103175 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1221 18:24:50.737051  103175 command_runner.go:130] > # running containers
	I1221 18:24:50.737056  103175 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1221 18:24:50.737062  103175 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1221 18:24:50.737068  103175 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1221 18:24:50.737079  103175 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1221 18:24:50.737090  103175 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1221 18:24:50.737102  103175 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1221 18:24:50.737113  103175 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1221 18:24:50.737122  103175 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1221 18:24:50.737127  103175 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1221 18:24:50.737134  103175 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1221 18:24:50.737143  103175 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1221 18:24:50.737152  103175 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1221 18:24:50.737158  103175 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1221 18:24:50.737168  103175 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1221 18:24:50.737184  103175 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1221 18:24:50.737197  103175 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1221 18:24:50.737214  103175 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1221 18:24:50.737224  103175 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1221 18:24:50.737252  103175 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1221 18:24:50.737266  103175 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1221 18:24:50.737275  103175 command_runner.go:130] > # Example:
	I1221 18:24:50.737280  103175 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1221 18:24:50.737288  103175 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1221 18:24:50.737293  103175 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1221 18:24:50.737301  103175 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1221 18:24:50.737305  103175 command_runner.go:130] > # cpuset = "0-1"
	I1221 18:24:50.737312  103175 command_runner.go:130] > # cpushares = 0
	I1221 18:24:50.737316  103175 command_runner.go:130] > # Where:
	I1221 18:24:50.737322  103175 command_runner.go:130] > # The workload name is workload-type.
	I1221 18:24:50.737329  103175 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1221 18:24:50.737337  103175 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1221 18:24:50.737342  103175 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1221 18:24:50.737352  103175 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1221 18:24:50.737360  103175 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1221 18:24:50.737367  103175 command_runner.go:130] > # 
	I1221 18:24:50.737373  103175 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1221 18:24:50.737379  103175 command_runner.go:130] > #
	I1221 18:24:50.737385  103175 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1221 18:24:50.737393  103175 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1221 18:24:50.737401  103175 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1221 18:24:50.737409  103175 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1221 18:24:50.737417  103175 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1221 18:24:50.737423  103175 command_runner.go:130] > [crio.image]
	I1221 18:24:50.737429  103175 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1221 18:24:50.737436  103175 command_runner.go:130] > # default_transport = "docker://"
	I1221 18:24:50.737442  103175 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1221 18:24:50.737451  103175 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1221 18:24:50.737457  103175 command_runner.go:130] > # global_auth_file = ""
	I1221 18:24:50.737463  103175 command_runner.go:130] > # The image used to instantiate infra containers.
	I1221 18:24:50.737470  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:24:50.737475  103175 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1221 18:24:50.737483  103175 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1221 18:24:50.737492  103175 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1221 18:24:50.737499  103175 command_runner.go:130] > # This option supports live configuration reload.
	I1221 18:24:50.737505  103175 command_runner.go:130] > # pause_image_auth_file = ""
	I1221 18:24:50.737511  103175 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1221 18:24:50.737521  103175 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1221 18:24:50.737529  103175 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1221 18:24:50.737538  103175 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1221 18:24:50.737545  103175 command_runner.go:130] > # pause_command = "/pause"
	I1221 18:24:50.737551  103175 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1221 18:24:50.737559  103175 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1221 18:24:50.737567  103175 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1221 18:24:50.737575  103175 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1221 18:24:50.737582  103175 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1221 18:24:50.737589  103175 command_runner.go:130] > # signature_policy = ""
	I1221 18:24:50.737598  103175 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1221 18:24:50.737607  103175 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1221 18:24:50.737613  103175 command_runner.go:130] > # changing them here.
	I1221 18:24:50.737617  103175 command_runner.go:130] > # insecure_registries = [
	I1221 18:24:50.737624  103175 command_runner.go:130] > # ]
	I1221 18:24:50.737630  103175 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1221 18:24:50.737638  103175 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1221 18:24:50.737645  103175 command_runner.go:130] > # image_volumes = "mkdir"
	I1221 18:24:50.737650  103175 command_runner.go:130] > # Temporary directory to use for storing big files
	I1221 18:24:50.737657  103175 command_runner.go:130] > # big_files_temporary_dir = ""
	I1221 18:24:50.737663  103175 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1221 18:24:50.737669  103175 command_runner.go:130] > # CNI plugins.
	I1221 18:24:50.737673  103175 command_runner.go:130] > [crio.network]
	I1221 18:24:50.737680  103175 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1221 18:24:50.737688  103175 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1221 18:24:50.737694  103175 command_runner.go:130] > # cni_default_network = ""
	I1221 18:24:50.737700  103175 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1221 18:24:50.737707  103175 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1221 18:24:50.737713  103175 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1221 18:24:50.737719  103175 command_runner.go:130] > # plugin_dirs = [
	I1221 18:24:50.737723  103175 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1221 18:24:50.737729  103175 command_runner.go:130] > # ]
	I1221 18:24:50.737735  103175 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1221 18:24:50.737741  103175 command_runner.go:130] > [crio.metrics]
	I1221 18:24:50.737746  103175 command_runner.go:130] > # Globally enable or disable metrics support.
	I1221 18:24:50.737753  103175 command_runner.go:130] > # enable_metrics = false
	I1221 18:24:50.737758  103175 command_runner.go:130] > # Specify enabled metrics collectors.
	I1221 18:24:50.737764  103175 command_runner.go:130] > # By default, all metrics are enabled.
	I1221 18:24:50.737770  103175 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1221 18:24:50.737781  103175 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1221 18:24:50.737790  103175 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1221 18:24:50.737794  103175 command_runner.go:130] > # metrics_collectors = [
	I1221 18:24:50.737798  103175 command_runner.go:130] > # 	"operations",
	I1221 18:24:50.737805  103175 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1221 18:24:50.737810  103175 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1221 18:24:50.737816  103175 command_runner.go:130] > # 	"operations_errors",
	I1221 18:24:50.737821  103175 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1221 18:24:50.737827  103175 command_runner.go:130] > # 	"image_pulls_by_name",
	I1221 18:24:50.737832  103175 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1221 18:24:50.737836  103175 command_runner.go:130] > # 	"image_pulls_failures",
	I1221 18:24:50.737851  103175 command_runner.go:130] > # 	"image_pulls_successes",
	I1221 18:24:50.737858  103175 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1221 18:24:50.737862  103175 command_runner.go:130] > # 	"image_layer_reuse",
	I1221 18:24:50.737867  103175 command_runner.go:130] > # 	"containers_oom_total",
	I1221 18:24:50.737871  103175 command_runner.go:130] > # 	"containers_oom",
	I1221 18:24:50.737877  103175 command_runner.go:130] > # 	"processes_defunct",
	I1221 18:24:50.737881  103175 command_runner.go:130] > # 	"operations_total",
	I1221 18:24:50.737887  103175 command_runner.go:130] > # 	"operations_latency_seconds",
	I1221 18:24:50.737892  103175 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1221 18:24:50.737898  103175 command_runner.go:130] > # 	"operations_errors_total",
	I1221 18:24:50.737902  103175 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1221 18:24:50.737909  103175 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1221 18:24:50.737914  103175 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1221 18:24:50.737918  103175 command_runner.go:130] > # 	"image_pulls_success_total",
	I1221 18:24:50.737924  103175 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1221 18:24:50.737929  103175 command_runner.go:130] > # 	"containers_oom_count_total",
	I1221 18:24:50.737934  103175 command_runner.go:130] > # ]
	I1221 18:24:50.737940  103175 command_runner.go:130] > # The port on which the metrics server will listen.
	I1221 18:24:50.737946  103175 command_runner.go:130] > # metrics_port = 9090
	I1221 18:24:50.737951  103175 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1221 18:24:50.737957  103175 command_runner.go:130] > # metrics_socket = ""
	I1221 18:24:50.737962  103175 command_runner.go:130] > # The certificate for the secure metrics server.
	I1221 18:24:50.737970  103175 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1221 18:24:50.737979  103175 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1221 18:24:50.737987  103175 command_runner.go:130] > # certificate on any modification event.
	I1221 18:24:50.737993  103175 command_runner.go:130] > # metrics_cert = ""
	I1221 18:24:50.737999  103175 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1221 18:24:50.738006  103175 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1221 18:24:50.738010  103175 command_runner.go:130] > # metrics_key = ""
	I1221 18:24:50.738017  103175 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1221 18:24:50.738023  103175 command_runner.go:130] > [crio.tracing]
	I1221 18:24:50.738029  103175 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1221 18:24:50.738035  103175 command_runner.go:130] > # enable_tracing = false
	I1221 18:24:50.738040  103175 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1221 18:24:50.738047  103175 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1221 18:24:50.738052  103175 command_runner.go:130] > # Number of samples to collect per million spans.
	I1221 18:24:50.738060  103175 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1221 18:24:50.738067  103175 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1221 18:24:50.738072  103175 command_runner.go:130] > [crio.stats]
	I1221 18:24:50.738078  103175 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1221 18:24:50.738086  103175 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1221 18:24:50.738090  103175 command_runner.go:130] > # stats_collection_period = 0
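
The block above is CRI-O's TOML configuration (/etc/crio/crio.conf) echoed back line by line. As a rough, hypothetical sketch of how a tool could read a couple of these values in Go with the github.com/BurntSushi/toml package (the struct shape below is an assumption for illustration, not how CRI-O itself loads its config):

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Models only the keys inspected here; [crio.runtime] and [crio.image]
	// are tables nested inside the top-level "crio" table.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // "cgroupfs" in the dump above
		fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)         // "registry.k8s.io/pause:3.9"
	}
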
	I1221 18:24:50.738151  103175 cni.go:84] Creating CNI manager for ""
	I1221 18:24:50.738159  103175 cni.go:136] 2 nodes found, recommending kindnet
	I1221 18:24:50.738166  103175 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1221 18:24:50.738182  103175 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-186629 NodeName:multinode-186629-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1221 18:24:50.738280  103175 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-186629-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
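
The YAML above is generated by minikube from the kubeadm options struct logged at kubeadm.go:176. A reduced sketch of that templating step using Go's standard text/template; the template text and field names here are simplified stand-ins, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	type kubeadmOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.58.3",
			APIServerPort:    8443,
			CRISocket:        "/var/run/crio/crio.sock",
			NodeName:         "multinode-186629-m02",
		}
		// Render the InitConfiguration fragment to stdout.
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}
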
	
	I1221 18:24:50.738330  103175 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-186629-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1221 18:24:50.738368  103175 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1221 18:24:50.745760  103175 command_runner.go:130] > kubeadm
	I1221 18:24:50.745776  103175 command_runner.go:130] > kubectl
	I1221 18:24:50.745780  103175 command_runner.go:130] > kubelet
	I1221 18:24:50.745797  103175 binaries.go:44] Found k8s binaries, skipping transfer
	I1221 18:24:50.745840  103175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1221 18:24:50.753087  103175 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1221 18:24:50.767719  103175 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1221 18:24:50.782077  103175 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1221 18:24:50.784727  103175 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1221 18:24:50.793505  103175 host.go:66] Checking if "multinode-186629" exists ...
	I1221 18:24:50.793749  103175 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:24:50.793728  103175 start.go:304] JoinCluster: &{Name:multinode-186629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-186629 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:24:50.793798  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1221 18:24:50.793836  103175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:24:50.809224  103175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:24:50.943994  103175 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gpjui6.88ztg2v1mz2qbn2i --discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b 
	I1221 18:24:50.947544  103175 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1221 18:24:50.947593  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpjui6.88ztg2v1mz2qbn2i --discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-186629-m02"
	I1221 18:24:50.979888  103175 command_runner.go:130] > [preflight] Running pre-flight checks
	I1221 18:24:51.005777  103175 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1221 18:24:51.005807  103175 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1221 18:24:51.005816  103175 command_runner.go:130] > OS: Linux
	I1221 18:24:51.005828  103175 command_runner.go:130] > CGROUPS_CPU: enabled
	I1221 18:24:51.005837  103175 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1221 18:24:51.005845  103175 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1221 18:24:51.005857  103175 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1221 18:24:51.005869  103175 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1221 18:24:51.005881  103175 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1221 18:24:51.005897  103175 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1221 18:24:51.005909  103175 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1221 18:24:51.005919  103175 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1221 18:24:51.081455  103175 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1221 18:24:51.081487  103175 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1221 18:24:51.103932  103175 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1221 18:24:51.104014  103175 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1221 18:24:51.104030  103175 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1221 18:24:51.172042  103175 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1221 18:24:53.184830  103175 command_runner.go:130] > This node has joined the cluster:
	I1221 18:24:53.184858  103175 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1221 18:24:53.184868  103175 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1221 18:24:53.184878  103175 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1221 18:24:53.187481  103175 command_runner.go:130] ! W1221 18:24:50.979591    1107 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1221 18:24:53.187515  103175 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1221 18:24:53.187532  103175 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1221 18:24:53.187554  103175 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gpjui6.88ztg2v1mz2qbn2i --discovery-token-ca-cert-hash sha256:ce55a46d5554fd73a9c46ea86d4565f651b48b614f1763c13cc6507a4e4d186b --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-186629-m02": (2.239946521s)
	I1221 18:24:53.187581  103175 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1221 18:24:53.340409  103175 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1221 18:24:53.340506  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea minikube.k8s.io/name=multinode-186629 minikube.k8s.io/updated_at=2023_12_21T18_24_53_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1221 18:24:53.407166  103175 command_runner.go:130] > node/multinode-186629-m02 labeled
	I1221 18:24:53.407202  103175 start.go:306] JoinCluster complete in 2.613472712s
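
The kubectl invocation above labels the newly joined node with minikube metadata. The same update can be written against the API directly as a strategic-merge patch with client-go; a self-contained sketch, with the kubeconfig path and label values assumed for illustration:

	package main

	import (
		"context"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Strategic merge: labels are merged into the existing set.
		patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"false"}}}`)
		if _, err := cs.CoreV1().Nodes().Patch(context.TODO(),
			"multinode-186629-m02", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			log.Fatal(err)
		}
	}
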
	I1221 18:24:53.407214  103175 cni.go:84] Creating CNI manager for ""
	I1221 18:24:53.407221  103175 cni.go:136] 2 nodes found, recommending kindnet
	I1221 18:24:53.407270  103175 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1221 18:24:53.410561  103175 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1221 18:24:53.410591  103175 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I1221 18:24:53.410610  103175 command_runner.go:130] > Device: 37h/55d	Inode: 582225      Links: 1
	I1221 18:24:53.410622  103175 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1221 18:24:53.410634  103175 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I1221 18:24:53.410643  103175 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I1221 18:24:53.410648  103175 command_runner.go:130] > Change: 2023-12-21 18:04:50.938311966 +0000
	I1221 18:24:53.410654  103175 command_runner.go:130] >  Birth: 2023-12-21 18:04:50.914310172 +0000
	I1221 18:24:53.410698  103175 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1221 18:24:53.410707  103175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1221 18:24:53.426126  103175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1221 18:24:53.627485  103175 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1221 18:24:53.630837  103175 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1221 18:24:53.632865  103175 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1221 18:24:53.642600  103175 command_runner.go:130] > daemonset.apps/kindnet configured
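
With the kindnet manifest applied, one way to confirm the DaemonSet actually rolled out to both nodes is to compare its desired and ready pod counts. Minikube's own verification below takes a different path, so treat this as an illustrative client-go sketch (kubeconfig path assumed):

	package main

	import (
		"context"
		"fmt"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// kindnet runs as a DaemonSet in kube-system; one pod per node.
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kindnet", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("kindnet: %d/%d pods ready\n", ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
	}
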
	I1221 18:24:53.646796  103175 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:24:53.647023  103175 kapi.go:59] client config for multinode-186629: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:24:53.647311  103175 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1221 18:24:53.647324  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:53.647334  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:53.647342  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:53.649058  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:53.649076  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:53.649086  103175 round_trippers.go:580]     Audit-Id: 2d6a1d01-273a-46cf-90f6-cb882cbe7f68
	I1221 18:24:53.649093  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:53.649101  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:53.649117  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:53.649129  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:53.649141  103175 round_trippers.go:580]     Content-Length: 291
	I1221 18:24:53.649151  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:53 GMT
	I1221 18:24:53.649181  103175 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9b1538f2-152b-4663-899a-9076fafae97f","resourceVersion":"413","creationTimestamp":"2023-12-21T18:23:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1221 18:24:53.649290  103175 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-186629" context rescaled to 1 replicas
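
The rescale above goes through the Deployment's scale subresource (the GET to .../deployments/coredns/scale returns a Scale object). The equivalent read-modify-write with client-go looks roughly like this sketch (kubeconfig path assumed):

	package main

	import (
		"context"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1 // one replica is enough for a small test cluster
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
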
	I1221 18:24:53.649322  103175 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1221 18:24:53.651648  103175 out.go:177] * Verifying Kubernetes components...
	I1221 18:24:53.652927  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:24:53.663139  103175 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:24:53.663331  103175 kapi.go:59] client config for multinode-186629: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.crt", KeyFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/profiles/multinode-186629/client.key", CAFile:"/home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1f5c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1221 18:24:53.663558  103175 node_ready.go:35] waiting up to 6m0s for node "multinode-186629-m02" to be "Ready" ...
	I1221 18:24:53.663631  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629-m02
	I1221 18:24:53.663641  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:53.663649  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:53.663654  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:53.665535  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:53.665554  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:53.665563  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:53 GMT
	I1221 18:24:53.665572  103175 round_trippers.go:580]     Audit-Id: 503899f9-fd6b-4c06-a25c-3e854a207f4c
	I1221 18:24:53.665580  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:53.665596  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:53.665602  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:53.665607  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:53.665717  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629-m02","uid":"2c9dd99a-a552-4dcc-8ad5-16185c7c72f2","resourceVersion":"446","creationTimestamp":"2023-12-21T18:24:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_21T18_24_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1221 18:24:54.163899  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629-m02
	I1221 18:24:54.163923  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.163930  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.163936  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.166042  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:54.166058  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.166065  103175 round_trippers.go:580]     Audit-Id: 81d32488-35d9-4373-bac1-94bb770594ab
	I1221 18:24:54.166070  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.166075  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.166083  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.166089  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.166100  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.166257  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629-m02","uid":"2c9dd99a-a552-4dcc-8ad5-16185c7c72f2","resourceVersion":"446","creationTimestamp":"2023-12-21T18:24:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_21T18_24_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5653 chars]
	I1221 18:24:54.663850  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629-m02
	I1221 18:24:54.663870  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.663878  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.663884  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.666024  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:54.666041  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.666047  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.666053  103175 round_trippers.go:580]     Audit-Id: e1182e18-6d10-4aae-8b08-b64544132958
	I1221 18:24:54.666058  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.666063  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.666068  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.666073  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.666251  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629-m02","uid":"2c9dd99a-a552-4dcc-8ad5-16185c7c72f2","resourceVersion":"461","creationTimestamp":"2023-12-21T18:24:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_21T18_24_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5728 chars]
	I1221 18:24:54.666555  103175 node_ready.go:49] node "multinode-186629-m02" has status "Ready":"True"
	I1221 18:24:54.666568  103175 node_ready.go:38] duration metric: took 1.002996658s waiting for node "multinode-186629-m02" to be "Ready" ...
	I1221 18:24:54.666577  103175 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:24:54.666627  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1221 18:24:54.666635  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.666642  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.666666  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.669708  103175 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1221 18:24:54.669725  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.669732  103175 round_trippers.go:580]     Audit-Id: fcf6799f-b197-43c5-88b3-ce141efd1236
	I1221 18:24:54.669752  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.669764  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.669777  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.669785  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.669791  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.670235  103175 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"409","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1221 18:24:54.673093  103175 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rzjlp" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.673180  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rzjlp
	I1221 18:24:54.673190  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.673200  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.673214  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.674990  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.675008  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.675020  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.675041  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.675048  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.675059  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.675069  103175 round_trippers.go:580]     Audit-Id: 9238572b-6fd7-4d91-8670-3be90066a0d4
	I1221 18:24:54.675080  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.675249  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rzjlp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"49af9dec-b485-4ae7-b65f-f9ae56b041de","resourceVersion":"409","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"24b2136e-eceb-462b-8244-ee5c5130c4a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24b2136e-eceb-462b-8244-ee5c5130c4a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1221 18:24:54.675743  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:54.675757  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.675767  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.675777  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.677355  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.677369  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.677375  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.677380  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.677385  103175 round_trippers.go:580]     Audit-Id: a2a7ab75-a1ff-4b4f-b578-befd468f724d
	I1221 18:24:54.677390  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.677395  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.677400  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.677560  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:54.677832  103175 pod_ready.go:92] pod "coredns-5dd5756b68-rzjlp" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:54.677848  103175 pod_ready.go:81] duration metric: took 4.735434ms waiting for pod "coredns-5dd5756b68-rzjlp" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.677856  103175 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.677910  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-186629
	I1221 18:24:54.677920  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.677930  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.677937  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.679586  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.679601  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.679610  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.679617  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.679625  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.679633  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.679643  103175 round_trippers.go:580]     Audit-Id: 387aebba-fbba-4412-8fe9-8f529af456b4
	I1221 18:24:54.679656  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.679745  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-186629","namespace":"kube-system","uid":"050d0edc-924f-43b4-ae37-c41be4b23abe","resourceVersion":"282","creationTimestamp":"2023-12-21T18:23:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8fc4208dd16cebfa046486404c6879d3","kubernetes.io/config.mirror":"8fc4208dd16cebfa046486404c6879d3","kubernetes.io/config.seen":"2023-12-21T18:23:49.133937645Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1221 18:24:54.680195  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:54.680216  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.680227  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.680237  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.681803  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.681820  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.681829  103175 round_trippers.go:580]     Audit-Id: c0a9a922-a2fe-4190-8c0b-adc474afc93c
	I1221 18:24:54.681838  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.681845  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.681859  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.681867  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.681875  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.681970  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:54.682249  103175 pod_ready.go:92] pod "etcd-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:54.682264  103175 pod_ready.go:81] duration metric: took 4.403162ms waiting for pod "etcd-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.682276  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.682315  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-186629
	I1221 18:24:54.682322  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.682329  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.682334  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.683883  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.683902  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.683912  103175 round_trippers.go:580]     Audit-Id: b0dc7d45-5297-42d9-8668-2a5e0db1aa5d
	I1221 18:24:54.683917  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.683922  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.683928  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.683936  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.683943  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.684049  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-186629","namespace":"kube-system","uid":"494ef2df-db06-45ea-89d9-d277b1915b9b","resourceVersion":"280","creationTimestamp":"2023-12-21T18:23:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"546ef8ac4384911117f3b86602f32ae5","kubernetes.io/config.mirror":"546ef8ac4384911117f3b86602f32ae5","kubernetes.io/config.seen":"2023-12-21T18:23:49.133939564Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1221 18:24:54.684430  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:54.684443  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.684449  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.684455  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.685858  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.685872  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.685879  103175 round_trippers.go:580]     Audit-Id: 2f709d03-2689-4061-ba89-ae3fb716b698
	I1221 18:24:54.685884  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.685889  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.685895  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.685900  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.685908  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.686052  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:54.686304  103175 pod_ready.go:92] pod "kube-apiserver-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:54.686315  103175 pod_ready.go:81] duration metric: took 4.033788ms waiting for pod "kube-apiserver-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.686323  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.686363  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-186629
	I1221 18:24:54.686371  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.686377  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.686383  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.688119  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.688134  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.688143  103175 round_trippers.go:580]     Audit-Id: a78a01dc-2eb9-4ee3-9bf4-0b24ed3c3f1e
	I1221 18:24:54.688151  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.688159  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.688168  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.688180  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.688193  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.688313  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-186629","namespace":"kube-system","uid":"327f27d3-7657-4072-a08d-b5ee04f8c570","resourceVersion":"274","creationTimestamp":"2023-12-21T18:23:55Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"934cc45a1b5ba86939f57849c5f23ab8","kubernetes.io/config.mirror":"934cc45a1b5ba86939f57849c5f23ab8","kubernetes.io/config.seen":"2023-12-21T18:23:54.990477588Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1221 18:24:54.688663  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:54.688676  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.688682  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.688688  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.690113  103175 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1221 18:24:54.690129  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.690137  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.690145  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.690153  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.690161  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.690176  103175 round_trippers.go:580]     Audit-Id: 24c5ea9d-cddd-4d31-89f5-c711d081deb3
	I1221 18:24:54.690189  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.690285  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:54.690543  103175 pod_ready.go:92] pod "kube-controller-manager-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:54.690558  103175 pod_ready.go:81] duration metric: took 4.228973ms waiting for pod "kube-controller-manager-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.690569  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qvrg" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:54.863867  103175 request.go:629] Waited for 173.242087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qvrg
	I1221 18:24:54.863921  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qvrg
	I1221 18:24:54.863926  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:54.863934  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:54.863940  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:54.866146  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:54.866167  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:54.866175  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:54.866182  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:54.866191  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:54.866201  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:54 GMT
	I1221 18:24:54.866213  103175 round_trippers.go:580]     Audit-Id: bd23876a-bf0f-463c-9b39-868472ec3f75
	I1221 18:24:54.866225  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:54.866393  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6qvrg","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e16cc1f-3273-4e29-a892-2a7fb65d8324","resourceVersion":"462","creationTimestamp":"2023-12-21T18:24:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"56989027-1f83-41ed-9e39-108798d50da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56989027-1f83-41ed-9e39-108798d50da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1221 18:24:55.064186  103175 request.go:629] Waited for 197.344344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-186629-m02
	I1221 18:24:55.064255  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629-m02
	I1221 18:24:55.064260  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:55.064268  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:55.064278  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:55.066380  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:55.066402  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:55.066411  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:55 GMT
	I1221 18:24:55.066419  103175 round_trippers.go:580]     Audit-Id: ef63dc9b-53b1-46b1-a096-4b9100b17e8b
	I1221 18:24:55.066426  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:55.066434  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:55.066441  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:55.066451  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:55.066550  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629-m02","uid":"2c9dd99a-a552-4dcc-8ad5-16185c7c72f2","resourceVersion":"461","creationTimestamp":"2023-12-21T18:24:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_21T18_24_53_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5728 chars]
	I1221 18:24:55.066852  103175 pod_ready.go:92] pod "kube-proxy-6qvrg" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:55.066868  103175 pod_ready.go:81] duration metric: took 376.292347ms waiting for pod "kube-proxy-6qvrg" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:55.066877  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sq9cp" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:55.264813  103175 request.go:629] Waited for 197.870685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sq9cp
	I1221 18:24:55.264874  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sq9cp
	I1221 18:24:55.264881  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:55.264891  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:55.264899  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:55.267118  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:55.267134  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:55.267141  103175 round_trippers.go:580]     Audit-Id: 746c0994-aac2-4430-afa4-3bd037bd5a89
	I1221 18:24:55.267147  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:55.267152  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:55.267157  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:55.267165  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:55.267179  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:55 GMT
	I1221 18:24:55.267428  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sq9cp","generateName":"kube-proxy-","namespace":"kube-system","uid":"74302016-3be7-43b4-9909-8a256ce497b6","resourceVersion":"372","creationTimestamp":"2023-12-21T18:24:08Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"56989027-1f83-41ed-9e39-108798d50da3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:24:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56989027-1f83-41ed-9e39-108798d50da3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1221 18:24:55.464178  103175 request.go:629] Waited for 196.342009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:55.464269  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:55.464278  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:55.464285  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:55.464293  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:55.466486  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:55.466506  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:55.466515  103175 round_trippers.go:580]     Audit-Id: b75ca124-5ccf-41f1-8356-12988d1baf7f
	I1221 18:24:55.466530  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:55.466538  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:55.466546  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:55.466557  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:55.466570  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:55 GMT
	I1221 18:24:55.466669  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:55.466973  103175 pod_ready.go:92] pod "kube-proxy-sq9cp" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:55.466990  103175 pod_ready.go:81] duration metric: took 400.107351ms waiting for pod "kube-proxy-sq9cp" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:55.467003  103175 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:55.663946  103175 request.go:629] Waited for 196.881392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-186629
	I1221 18:24:55.664005  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-186629
	I1221 18:24:55.664013  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:55.664021  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:55.664030  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:55.666207  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:55.666231  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:55.666241  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:55 GMT
	I1221 18:24:55.666251  103175 round_trippers.go:580]     Audit-Id: add4c03d-191e-4f0d-bacc-662e504bd5fe
	I1221 18:24:55.666259  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:55.666268  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:55.666286  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:55.666298  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:55.666405  103175 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-186629","namespace":"kube-system","uid":"71e349f9-0a8d-43da-918d-917bbe11b7b1","resourceVersion":"281","creationTimestamp":"2023-12-21T18:23:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cff871542a279c784fc3936f791b252","kubernetes.io/config.mirror":"4cff871542a279c784fc3936f791b252","kubernetes.io/config.seen":"2023-12-21T18:23:49.133933095Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-21T18:23:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1221 18:24:55.864058  103175 request.go:629] Waited for 197.271815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:55.864149  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-186629
	I1221 18:24:55.864161  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:55.864172  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:55.864185  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:55.866307  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:55.866331  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:55.866341  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:55 GMT
	I1221 18:24:55.866350  103175 round_trippers.go:580]     Audit-Id: d012193a-027f-43ee-af21-6d797962b0b9
	I1221 18:24:55.866359  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:55.866369  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:55.866385  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:55.866394  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:55.866572  103175 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-21T18:23:52Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1221 18:24:55.866858  103175 pod_ready.go:92] pod "kube-scheduler-multinode-186629" in "kube-system" namespace has status "Ready":"True"
	I1221 18:24:55.866872  103175 pod_ready.go:81] duration metric: took 399.861728ms waiting for pod "kube-scheduler-multinode-186629" in "kube-system" namespace to be "Ready" ...
	I1221 18:24:55.866881  103175 pod_ready.go:38] duration metric: took 1.200293723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1221 18:24:55.866898  103175 system_svc.go:44] waiting for kubelet service to be running ....
	I1221 18:24:55.866938  103175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:24:55.877602  103175 system_svc.go:56] duration metric: took 10.69755ms WaitForService to wait for kubelet.
	I1221 18:24:55.877626  103175 kubeadm.go:581] duration metric: took 2.228273074s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1221 18:24:55.877648  103175 node_conditions.go:102] verifying NodePressure condition ...
	I1221 18:24:56.063978  103175 request.go:629] Waited for 186.259891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1221 18:24:56.064039  103175 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1221 18:24:56.064047  103175 round_trippers.go:469] Request Headers:
	I1221 18:24:56.064058  103175 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1221 18:24:56.064077  103175 round_trippers.go:473]     Accept: application/json, */*
	I1221 18:24:56.066677  103175 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1221 18:24:56.066700  103175 round_trippers.go:577] Response Headers:
	I1221 18:24:56.066709  103175 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d560ede8-8931-4643-92e9-0644896747d5
	I1221 18:24:56.066718  103175 round_trippers.go:580]     Date: Thu, 21 Dec 2023 18:24:56 GMT
	I1221 18:24:56.066727  103175 round_trippers.go:580]     Audit-Id: 3c7b0392-db69-45c4-9906-45c66eb199a1
	I1221 18:24:56.066735  103175 round_trippers.go:580]     Cache-Control: no-cache, private
	I1221 18:24:56.066749  103175 round_trippers.go:580]     Content-Type: application/json
	I1221 18:24:56.066757  103175 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d390444b-f4db-49db-9563-92eecb4a9df5
	I1221 18:24:56.067002  103175 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"467"},"items":[{"metadata":{"name":"multinode-186629","uid":"32ef583b-6693-4a3f-805f-2dd24b8761d3","resourceVersion":"390","creationTimestamp":"2023-12-21T18:23:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-186629","kubernetes.io/os":"linux","minikube.k8s.io/commit":"053db14b71765e8eac0607e1192d5903e3b3dcea","minikube.k8s.io/name":"multinode-186629","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_21T18_23_55_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12720 chars]
	I1221 18:24:56.067494  103175 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 18:24:56.067510  103175 node_conditions.go:123] node cpu capacity is 8
	I1221 18:24:56.067519  103175 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1221 18:24:56.067523  103175 node_conditions.go:123] node cpu capacity is 8
	I1221 18:24:56.067526  103175 node_conditions.go:105] duration metric: took 189.873428ms to run NodePressure ...
	I1221 18:24:56.067537  103175 start.go:228] waiting for startup goroutines ...
	I1221 18:24:56.067572  103175 start.go:242] writing updated cluster config ...
	I1221 18:24:56.067808  103175 ssh_runner.go:195] Run: rm -f paused
	I1221 18:24:56.112483  103175 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1221 18:24:56.115121  103175 out.go:177] * Done! kubectl is now configured to use "multinode-186629" cluster and "default" namespace by default
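
The log above is one readiness poll: a GET against /api/v1/nodes/multinode-186629-m02 roughly every 500ms until the node reports "Ready":"True", then the same per-pod wait for each system-critical pod, with the kubelet check reduced to "sudo systemctl is-active --quiet service kubelet" over SSH. The "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter (5 QPS, burst 10, used whenever rest.Config leaves QPS and Burst at zero, as the dumped config here does). A minimal sketch of that polling loop with client-go; this is illustrative, not minikube's actual node_ready.go, and the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; minikube builds its rest.Config directly.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // Leaving cfg.QPS/cfg.Burst at zero selects client-go's defaults
        // (5 qps, burst 10), which is what produces the "client-side
        // throttling" waits recorded in the log.
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6m0s, the cadence and timeout seen above.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx,
                    "multinode-186629-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient; keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "multinode-186629-m02" is Ready`)
    }

The same Get-and-check pattern then repeats per pod in pod_ready.go, which is why each pod wait above costs a pair of GETs (pod, then its node) and why the rate limiter starts inserting ~200ms waits once the burst is spent.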
	
	
	==> CRI-O <==
	Dec 21 18:24:39 multinode-186629 crio[957]: time="2023-12-21 18:24:39.822669422Z" level=info msg="Starting container: dd888317282449d9aa3f5a4e693ca91f0a10658f47777d39aa71d60da13c91ff" id=9c053170-34c2-4e5f-bcd5-b807ec5e1bea name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 18:24:39 multinode-186629 crio[957]: time="2023-12-21 18:24:39.823195380Z" level=info msg="Created container 94ef6311abb4c898aeab828719120002851c220bf534350af79586addc8732c2: kube-system/storage-provisioner/storage-provisioner" id=01453ecc-39cc-4e98-ae8c-b424bd928fc9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 18:24:39 multinode-186629 crio[957]: time="2023-12-21 18:24:39.823673334Z" level=info msg="Starting container: 94ef6311abb4c898aeab828719120002851c220bf534350af79586addc8732c2" id=9212aba0-8c00-4fd2-8e61-6dc5a916ceb6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 18:24:39 multinode-186629 crio[957]: time="2023-12-21 18:24:39.829257311Z" level=info msg="Started container" PID=2334 containerID=dd888317282449d9aa3f5a4e693ca91f0a10658f47777d39aa71d60da13c91ff description=kube-system/coredns-5dd5756b68-rzjlp/coredns id=9c053170-34c2-4e5f-bcd5-b807ec5e1bea name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c31cd19fd6f84a7795960e82738cc935420b5d9acb28cced0a0ed0cfb5bdcc0
	Dec 21 18:24:39 multinode-186629 crio[957]: time="2023-12-21 18:24:39.829763040Z" level=info msg="Started container" PID=2335 containerID=94ef6311abb4c898aeab828719120002851c220bf534350af79586addc8732c2 description=kube-system/storage-provisioner/storage-provisioner id=9212aba0-8c00-4fd2-8e61-6dc5a916ceb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d46b070f08d39d59883a8bc6f6a2d05fcfcae49fa3e05eff30839326a4aff9c
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.096133865Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-qq9gx/POD" id=2cf2a2a2-75d9-4585-a286-4a6668cfdee4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.096209115Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.108057554Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-qq9gx Namespace:default ID:ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459 UID:8979d378-8059-48bf-b5bc-4523a77ce4e5 NetNS:/var/run/netns/2e65dd23-236d-455b-9221-0b8849ec2c99 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.108089571Z" level=info msg="Adding pod default_busybox-5bc68d56bd-qq9gx to CNI network \"kindnet\" (type=ptp)"
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.115871470Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-qq9gx Namespace:default ID:ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459 UID:8979d378-8059-48bf-b5bc-4523a77ce4e5 NetNS:/var/run/netns/2e65dd23-236d-455b-9221-0b8849ec2c99 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.115971331Z" level=info msg="Checking pod default_busybox-5bc68d56bd-qq9gx for CNI network kindnet (type=ptp)"
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.132250882Z" level=info msg="Ran pod sandbox ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459 with infra container: default/busybox-5bc68d56bd-qq9gx/POD" id=2cf2a2a2-75d9-4585-a286-4a6668cfdee4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.133223487Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=4ed7662e-bb5d-42be-89fa-8daff26ebeac name=/runtime.v1.ImageService/ImageStatus
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.133483471Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=4ed7662e-bb5d-42be-89fa-8daff26ebeac name=/runtime.v1.ImageService/ImageStatus
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.134280581Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=adabd87c-516c-4899-9d6c-370ce417353a name=/runtime.v1.ImageService/PullImage
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.136344203Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 21 18:24:57 multinode-186629 crio[957]: time="2023-12-21 18:24:57.873415749Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.599431333Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=adabd87c-516c-4899-9d6c-370ce417353a name=/runtime.v1.ImageService/PullImage
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.600421480Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ccd882ea-c694-4650-b4f5-d418afb3f5ce name=/runtime.v1.ImageService/ImageStatus
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.600995960Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ccd882ea-c694-4650-b4f5-d418afb3f5ce name=/runtime.v1.ImageService/ImageStatus
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.601710914Z" level=info msg="Creating container: default/busybox-5bc68d56bd-qq9gx/busybox" id=2674f7df-b64e-4046-912d-109433066f3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.601796200Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.668192261Z" level=info msg="Created container b86ef6752f46a273d07392a7371746f2d82209e1cb63821f356f1364f24d6a84: default/busybox-5bc68d56bd-qq9gx/busybox" id=2674f7df-b64e-4046-912d-109433066f3c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.668740744Z" level=info msg="Starting container: b86ef6752f46a273d07392a7371746f2d82209e1cb63821f356f1364f24d6a84" id=fd8c9c5e-f55c-4add-98ac-f23b60400887 name=/runtime.v1.RuntimeService/StartContainer
	Dec 21 18:24:59 multinode-186629 crio[957]: time="2023-12-21 18:24:59.674739645Z" level=info msg="Started container" PID=2516 containerID=b86ef6752f46a273d07392a7371746f2d82209e1cb63821f356f1364f24d6a84 description=default/busybox-5bc68d56bd-qq9gx/busybox id=fd8c9c5e-f55c-4add-98ac-f23b60400887 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b86ef6752f46a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   ffd0c52c08eab       busybox-5bc68d56bd-qq9gx
	dd88831728244       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      23 seconds ago       Running             coredns                   0                   4c31cd19fd6f8       coredns-5dd5756b68-rzjlp
	94ef6311abb4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      23 seconds ago       Running             storage-provisioner       0                   5d46b070f08d3       storage-provisioner
	7470b25c8efef       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      55 seconds ago       Running             kindnet-cni               0                   87dc5c23210af       kindnet-w2nh9
	b02aa4f2680ec       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      55 seconds ago       Running             kube-proxy                0                   cea24404e4b8c       kube-proxy-sq9cp
	cc6af1b7ee812       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   4313dc1c872cc       etcd-multinode-186629
	0713d2e5fb73a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   c5ce77943a8ef       kube-scheduler-multinode-186629
	541db610916bf       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   9861b5e5bea95       kube-controller-manager-multinode-186629
	719ce49243f4b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   3e9f081be1ee2       kube-apiserver-multinode-186629
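
The listing above is CRI-O's view of the node. While the profile is still running, an equivalent listing can be pulled by hand (a sketch, using the same ssh form this suite uses elsewhere):
	out/minikube-linux-amd64 -p multinode-186629 ssh "sudo crictl ps -a"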
	
	
	==> coredns [dd888317282449d9aa3f5a4e693ca91f0a10658f47777d39aa71d60da13c91ff] <==
	[INFO] 10.244.1.2:58994 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010518s
	[INFO] 10.244.0.3:59293 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085682s
	[INFO] 10.244.0.3:49613 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001427101s
	[INFO] 10.244.0.3:54901 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076848s
	[INFO] 10.244.0.3:35185 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051511s
	[INFO] 10.244.0.3:41987 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001024066s
	[INFO] 10.244.0.3:47046 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000051313s
	[INFO] 10.244.0.3:36634 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053003s
	[INFO] 10.244.0.3:59385 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051503s
	[INFO] 10.244.1.2:54027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140354s
	[INFO] 10.244.1.2:38701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085076s
	[INFO] 10.244.1.2:45835 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073767s
	[INFO] 10.244.1.2:55689 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067748s
	[INFO] 10.244.0.3:45576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118323s
	[INFO] 10.244.0.3:35452 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066624s
	[INFO] 10.244.0.3:51290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079406s
	[INFO] 10.244.0.3:49385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053948s
	[INFO] 10.244.1.2:36876 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104401s
	[INFO] 10.244.1.2:40359 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125745s
	[INFO] 10.244.1.2:34964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112712s
	[INFO] 10.244.1.2:60847 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090471s
	[INFO] 10.244.0.3:33110 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096363s
	[INFO] 10.244.0.3:44746 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064457s
	[INFO] 10.244.0.3:47580 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000054395s
	[INFO] 10.244.0.3:52553 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005477s
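
The queries above show kubernetes.default and host.minikube.internal resolving from both pod subnets (10.244.0.3 and 10.244.1.2). A manual spot-check from one of the busybox pods (pod name taken from the container listing above) might look like:
	kubectl --context multinode-186629 exec busybox-5bc68d56bd-qq9gx -- nslookup kubernetes.default.svc.cluster.local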
	
	
	==> describe nodes <==
	Name:               multinode-186629
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-186629
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea
	                    minikube.k8s.io/name=multinode-186629
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_21T18_23_55_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Dec 2023 18:23:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-186629
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Dec 2023 18:24:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Dec 2023 18:24:39 +0000   Thu, 21 Dec 2023 18:23:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Dec 2023 18:24:39 +0000   Thu, 21 Dec 2023 18:23:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Dec 2023 18:24:39 +0000   Thu, 21 Dec 2023 18:23:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Dec 2023 18:24:39 +0000   Thu, 21 Dec 2023 18:24:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-186629
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 387eb1e34abc4f82bf80796531801246
	  System UUID:                4cb0a28b-7c02-4edb-91b2-962b0bdff9e2
	  Boot ID:                    d99d8f8f-1497-48b1-8406-284c1d2cae5c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-qq9gx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-rzjlp                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-multinode-186629                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-w2nh9                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-multinode-186629             250m (3%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-multinode-186629    200m (2%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-sq9cp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-multinode-186629             100m (1%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 69s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s   kubelet          Node multinode-186629 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s   kubelet          Node multinode-186629 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s   kubelet          Node multinode-186629 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node multinode-186629 event: Registered Node multinode-186629 in Controller
	  Normal  NodeReady                24s   kubelet          Node multinode-186629 status is now: NodeReady
	
	
	Name:               multinode-186629-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-186629-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=053db14b71765e8eac0607e1192d5903e3b3dcea
	                    minikube.k8s.io/name=multinode-186629
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_21T18_24_53_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Dec 2023 18:24:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-186629-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Dec 2023 18:25:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Dec 2023 18:24:54 +0000   Thu, 21 Dec 2023 18:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Dec 2023 18:24:54 +0000   Thu, 21 Dec 2023 18:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Dec 2023 18:24:54 +0000   Thu, 21 Dec 2023 18:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Dec 2023 18:24:54 +0000   Thu, 21 Dec 2023 18:24:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-186629-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859428Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d40911b4b53463aa1d303979eb9276d
	  System UUID:                3faa8d03-6838-4bdc-82ca-bfa4d5bdafa1
	  Boot ID:                    d99d8f8f-1497-48b1-8406-284c1d2cae5c
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pvfqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-5zf8j               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-6qvrg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11s (x5 over 12s)  kubelet          Node multinode-186629-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 12s)  kubelet          Node multinode-186629-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 12s)  kubelet          Node multinode-186629-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9s                 kubelet          Node multinode-186629-m02 status is now: NodeReady
	  Normal  RegisteredNode           6s                 node-controller  Node multinode-186629-m02 event: Registered Node multinode-186629-m02 in Controller
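
Both nodes report Ready and carry distinct pod CIDRs (10.244.0.0/24 and 10.244.1.0/24), which is what cross-node pod traffic depends on. A compact view of the same assignment can be regenerated with, for example:
	kubectl --context multinode-186629 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR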
	
	
	==> dmesg <==
	[  +0.004939] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006565] FS-Cache: N-cookie d=00000000ac11eb8c{9p.inode} n=00000000ae093f63
	[  +0.008723] FS-Cache: N-key=[8] '85a00f0200000000'
	[  +2.570987] FS-Cache: Duplicate cookie detected
	[  +0.004718] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006742] FS-Cache: O-cookie d=0000000044b78208{9P.session} n=000000008e746005
	[  +0.007517] FS-Cache: O-key=[10] '34323935373333333334'
	[  +0.005348] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006584] FS-Cache: N-cookie d=0000000044b78208{9P.session} n=00000000cbd55615
	[  +0.008899] FS-Cache: N-key=[10] '34323935373333333334'
	[Dec21 18:14] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec21 18:16] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +1.028201] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +2.015857] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +4.223691] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[  +8.191334] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[Dec21 18:17] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
	[ +34.045433] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 26 08 12 61 f9 1a 92 cd c6 b7 a9 82 08 00
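
The repeated "martian source ... from 127.0.0.1" entries are consistent with kube-proxy setting route_localnet=1 (see its log below); if in doubt, the current value can be checked on the node with:
	out/minikube-linux-amd64 -p multinode-186629 ssh "sysctl net.ipv4.conf.eth0.route_localnet"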
	
	
	==> etcd [cc6af1b7ee812455f1f2bb694ecda3be467d6172e79f1f4b0cc19d865b37a50a] <==
	{"level":"info","ts":"2023-12-21T18:23:49.886575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-21T18:23:49.886784Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-21T18:23:49.887136Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-21T18:23:49.887321Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-21T18:23:49.887361Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-21T18:23:49.88745Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-21T18:23:49.887483Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-21T18:23:50.117068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-21T18:23:50.117159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-21T18:23:50.117187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-21T18:23:50.117206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-21T18:23:50.117213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-21T18:23:50.117226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-21T18:23:50.117256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-21T18:23:50.118147Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:23:50.118807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-21T18:23:50.118806Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-186629 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-21T18:23:50.118839Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-21T18:23:50.119061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-21T18:23:50.119106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:23:50.119262Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:23:50.119324Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-21T18:23:50.119126Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-21T18:23:50.120035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-21T18:23:50.120235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:25:03 up  1:07,  0 users,  load average: 0.80, 1.04, 0.73
	Linux multinode-186629 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [7470b25c8efefb52536ff6b4487e9dd16985ffa08ea0d3ee80aadf7375ac701e] <==
	I1221 18:24:08.987196       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1221 18:24:08.987256       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1221 18:24:08.987414       1 main.go:116] setting mtu 1500 for CNI 
	I1221 18:24:08.987434       1 main.go:146] kindnetd IP family: "ipv4"
	I1221 18:24:08.987451       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1221 18:24:39.218521       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1221 18:24:39.226697       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1221 18:24:39.226721       1 main.go:227] handling current node
	I1221 18:24:49.240563       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1221 18:24:49.240591       1 main.go:227] handling current node
	I1221 18:24:59.245556       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1221 18:24:59.245590       1 main.go:227] handling current node
	I1221 18:24:59.245600       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1221 18:24:59.245605       1 main.go:250] Node multinode-186629-m02 has CIDR [10.244.1.0/24] 
	I1221 18:24:59.245761       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
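
kindnet added a route to the second node's pod CIDR via 192.168.58.3. Whether that route actually landed in the kernel routing table can be verified on the node:
	out/minikube-linux-amd64 -p multinode-186629 ssh "ip route show 10.244.1.0/24"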
	
	
	==> kube-apiserver [719ce49243f4b9c64b09331910fea82947609dbc0a0b5e73742a8c2e553b99c9] <==
	I1221 18:23:52.386932       1 autoregister_controller.go:141] Starting autoregister controller
	I1221 18:23:52.386995       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1221 18:23:52.387030       1 cache.go:39] Caches are synced for autoregister controller
	I1221 18:23:52.386314       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1221 18:23:52.387281       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1221 18:23:52.386594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1221 18:23:52.386854       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1221 18:23:52.387587       1 controller.go:624] quota admission added evaluator for: namespaces
	E1221 18:23:52.388922       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1221 18:23:52.592208       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1221 18:23:53.222538       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1221 18:23:53.225721       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1221 18:23:53.225739       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1221 18:23:53.574442       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1221 18:23:53.605108       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1221 18:23:53.699277       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1221 18:23:53.707984       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1221 18:23:53.709130       1 controller.go:624] quota admission added evaluator for: endpoints
	I1221 18:23:53.715112       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1221 18:23:54.316779       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1221 18:23:54.938387       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1221 18:23:54.947010       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1221 18:23:54.955347       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1221 18:24:07.722280       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1221 18:24:08.093910       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [541db610916bf71e70ff3f6b8b8929e321e12ee4da19130bc929b981f9a6ee5c] <==
	I1221 18:24:08.387078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.788µs"
	I1221 18:24:39.419194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.327µs"
	I1221 18:24:39.435509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.085µs"
	I1221 18:24:40.163929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.356µs"
	I1221 18:24:40.187641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.134243ms"
	I1221 18:24:40.187789       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.974µs"
	I1221 18:24:42.183374       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1221 18:24:52.973309       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-186629-m02\" does not exist"
	I1221 18:24:52.979721       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-186629-m02" podCIDRs=["10.244.1.0/24"]
	I1221 18:24:52.982233       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6qvrg"
	I1221 18:24:52.982329       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5zf8j"
	I1221 18:24:54.419714       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-186629-m02"
	I1221 18:24:56.759738       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1221 18:24:56.782019       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pvfqq"
	I1221 18:24:56.786624       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-qq9gx"
	I1221 18:24:56.793337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.745834ms"
	I1221 18:24:56.804809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.42147ms"
	I1221 18:24:56.814136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.200973ms"
	I1221 18:24:56.814229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.873µs"
	I1221 18:24:57.185768       1 event.go:307] "Event occurred" object="multinode-186629-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-186629-m02 event: Registered Node multinode-186629-m02 in Controller"
	I1221 18:24:57.185837       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-186629-m02"
	I1221 18:25:00.202252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.780635ms"
	I1221 18:25:00.202351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.084µs"
	I1221 18:25:00.509710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.846653ms"
	I1221 18:25:00.509801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.935µs"
	
	
	==> kube-proxy [b02aa4f2680ec40136fc5e102dc93b6bb623421893b66576335cfec5afed7b00] <==
	I1221 18:24:08.916960       1 server_others.go:69] "Using iptables proxy"
	I1221 18:24:08.926756       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1221 18:24:09.003575       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1221 18:24:09.005569       1 server_others.go:152] "Using iptables Proxier"
	I1221 18:24:09.005600       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1221 18:24:09.005608       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1221 18:24:09.005646       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1221 18:24:09.005901       1 server.go:846] "Version info" version="v1.28.4"
	I1221 18:24:09.005915       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1221 18:24:09.008280       1 config.go:315] "Starting node config controller"
	I1221 18:24:09.008341       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1221 18:24:09.008569       1 config.go:188] "Starting service config controller"
	I1221 18:24:09.008586       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1221 18:24:09.008604       1 config.go:97] "Starting endpoint slice config controller"
	I1221 18:24:09.008609       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1221 18:24:09.108657       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1221 18:24:09.108712       1 shared_informer.go:318] Caches are synced for service config
	I1221 18:24:09.108928       1 shared_informer.go:318] Caches are synced for node config
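
kube-proxy is running in iptables mode with all three config caches synced, so service rules should be materialized in the nat table; a quick look (a sketch):
	out/minikube-linux-amd64 -p multinode-186629 ssh "sudo iptables -t nat -L KUBE-SERVICES -n"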
	
	
	==> kube-scheduler [0713d2e5fb73ac0d8fbecaa09a33022298bd44e5ff8c4bacb75bcd5bb8221494] <==
	W1221 18:23:52.393387       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:52.395411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1221 18:23:52.393449       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:52.395442       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1221 18:23:52.393467       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:52.395460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1221 18:23:52.393558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 18:23:52.395478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1221 18:23:52.393560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:52.395495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1221 18:23:52.393637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1221 18:23:52.395512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1221 18:23:52.393654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1221 18:23:52.395543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1221 18:23:52.395332       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1221 18:23:52.395566       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1221 18:23:53.249280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:53.249315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1221 18:23:53.315998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:53.316038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1221 18:23:53.357248       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1221 18:23:53.357289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1221 18:23:53.378514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1221 18:23:53.378546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1221 18:23:53.910750       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: I1221 18:24:08.286932    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74302016-3be7-43b4-9909-8a256ce497b6-lib-modules\") pod \"kube-proxy-sq9cp\" (UID: \"74302016-3be7-43b4-9909-8a256ce497b6\") " pod="kube-system/kube-proxy-sq9cp"
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: I1221 18:24:08.287011    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jtft\" (UniqueName: \"kubernetes.io/projected/74302016-3be7-43b4-9909-8a256ce497b6-kube-api-access-2jtft\") pod \"kube-proxy-sq9cp\" (UID: \"74302016-3be7-43b4-9909-8a256ce497b6\") " pod="kube-system/kube-proxy-sq9cp"
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: I1221 18:24:08.287161    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp57v\" (UniqueName: \"kubernetes.io/projected/731e5a37-9d18-4cee-b269-127e4ad9c8cf-kube-api-access-kp57v\") pod \"kindnet-w2nh9\" (UID: \"731e5a37-9d18-4cee-b269-127e4ad9c8cf\") " pod="kube-system/kindnet-w2nh9"
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: I1221 18:24:08.287210    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74302016-3be7-43b4-9909-8a256ce497b6-xtables-lock\") pod \"kube-proxy-sq9cp\" (UID: \"74302016-3be7-43b4-9909-8a256ce497b6\") " pod="kube-system/kube-proxy-sq9cp"
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: I1221 18:24:08.287237    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/731e5a37-9d18-4cee-b269-127e4ad9c8cf-xtables-lock\") pod \"kindnet-w2nh9\" (UID: \"731e5a37-9d18-4cee-b269-127e4ad9c8cf\") " pod="kube-system/kindnet-w2nh9"
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: I1221 18:24:08.287292    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/731e5a37-9d18-4cee-b269-127e4ad9c8cf-lib-modules\") pod \"kindnet-w2nh9\" (UID: \"731e5a37-9d18-4cee-b269-127e4ad9c8cf\") " pod="kube-system/kindnet-w2nh9"
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: W1221 18:24:08.519996    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio-cea24404e4b8c8a6a6fda28e153344eac546d964d72bc82ec1323e22d96517ad WatchSource:0}: Error finding container cea24404e4b8c8a6a6fda28e153344eac546d964d72bc82ec1323e22d96517ad: Status 404 returned error can't find the container with id cea24404e4b8c8a6a6fda28e153344eac546d964d72bc82ec1323e22d96517ad
	Dec 21 18:24:08 multinode-186629 kubelet[1591]: W1221 18:24:08.520355    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio-87dc5c23210affef6f69bf6b2c10557436e17e19f77377c7dcab54e117d57ded WatchSource:0}: Error finding container 87dc5c23210affef6f69bf6b2c10557436e17e19f77377c7dcab54e117d57ded: Status 404 returned error can't find the container with id 87dc5c23210affef6f69bf6b2c10557436e17e19f77377c7dcab54e117d57ded
	Dec 21 18:24:09 multinode-186629 kubelet[1591]: I1221 18:24:09.120957    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sq9cp" podStartSLOduration=1.120905519 podCreationTimestamp="2023-12-21 18:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-21 18:24:09.12064129 +0000 UTC m=+14.204217637" watchObservedRunningTime="2023-12-21 18:24:09.120905519 +0000 UTC m=+14.204481865"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.395884    1591 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.418735    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-w2nh9" podStartSLOduration=31.418681633 podCreationTimestamp="2023-12-21 18:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-21 18:24:09.134448508 +0000 UTC m=+14.218024856" watchObservedRunningTime="2023-12-21 18:24:39.418681633 +0000 UTC m=+44.502257981"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.419154    1591 topology_manager.go:215] "Topology Admit Handler" podUID="e410c9c3-aca6-4eb6-9186-d00fa92f6cb0" podNamespace="kube-system" podName="storage-provisioner"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.419334    1591 topology_manager.go:215] "Topology Admit Handler" podUID="49af9dec-b485-4ae7-b65f-f9ae56b041de" podNamespace="kube-system" podName="coredns-5dd5756b68-rzjlp"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.561286    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e410c9c3-aca6-4eb6-9186-d00fa92f6cb0-tmp\") pod \"storage-provisioner\" (UID: \"e410c9c3-aca6-4eb6-9186-d00fa92f6cb0\") " pod="kube-system/storage-provisioner"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.561355    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49af9dec-b485-4ae7-b65f-f9ae56b041de-config-volume\") pod \"coredns-5dd5756b68-rzjlp\" (UID: \"49af9dec-b485-4ae7-b65f-f9ae56b041de\") " pod="kube-system/coredns-5dd5756b68-rzjlp"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.561457    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5lk\" (UniqueName: \"kubernetes.io/projected/e410c9c3-aca6-4eb6-9186-d00fa92f6cb0-kube-api-access-mx5lk\") pod \"storage-provisioner\" (UID: \"e410c9c3-aca6-4eb6-9186-d00fa92f6cb0\") " pod="kube-system/storage-provisioner"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: I1221 18:24:39.561498    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ngl\" (UniqueName: \"kubernetes.io/projected/49af9dec-b485-4ae7-b65f-f9ae56b041de-kube-api-access-l6ngl\") pod \"coredns-5dd5756b68-rzjlp\" (UID: \"49af9dec-b485-4ae7-b65f-f9ae56b041de\") " pod="kube-system/coredns-5dd5756b68-rzjlp"
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: W1221 18:24:39.766164    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio-5d46b070f08d39d59883a8bc6f6a2d05fcfcae49fa3e05eff30839326a4aff9c WatchSource:0}: Error finding container 5d46b070f08d39d59883a8bc6f6a2d05fcfcae49fa3e05eff30839326a4aff9c: Status 404 returned error can't find the container with id 5d46b070f08d39d59883a8bc6f6a2d05fcfcae49fa3e05eff30839326a4aff9c
	Dec 21 18:24:39 multinode-186629 kubelet[1591]: W1221 18:24:39.766554    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio-4c31cd19fd6f84a7795960e82738cc935420b5d9acb28cced0a0ed0cfb5bdcc0 WatchSource:0}: Error finding container 4c31cd19fd6f84a7795960e82738cc935420b5d9acb28cced0a0ed0cfb5bdcc0: Status 404 returned error can't find the container with id 4c31cd19fd6f84a7795960e82738cc935420b5d9acb28cced0a0ed0cfb5bdcc0
	Dec 21 18:24:40 multinode-186629 kubelet[1591]: I1221 18:24:40.163906    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rzjlp" podStartSLOduration=32.163854619 podCreationTimestamp="2023-12-21 18:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-21 18:24:40.163669181 +0000 UTC m=+45.247245527" watchObservedRunningTime="2023-12-21 18:24:40.163854619 +0000 UTC m=+45.247430967"
	Dec 21 18:24:40 multinode-186629 kubelet[1591]: I1221 18:24:40.172030    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.171987644 podCreationTimestamp="2023-12-21 18:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-21 18:24:40.171850847 +0000 UTC m=+45.255427194" watchObservedRunningTime="2023-12-21 18:24:40.171987644 +0000 UTC m=+45.255563992"
	Dec 21 18:24:56 multinode-186629 kubelet[1591]: I1221 18:24:56.794015    1591 topology_manager.go:215] "Topology Admit Handler" podUID="8979d378-8059-48bf-b5bc-4523a77ce4e5" podNamespace="default" podName="busybox-5bc68d56bd-qq9gx"
	Dec 21 18:24:56 multinode-186629 kubelet[1591]: I1221 18:24:56.970091    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qhw\" (UniqueName: \"kubernetes.io/projected/8979d378-8059-48bf-b5bc-4523a77ce4e5-kube-api-access-82qhw\") pod \"busybox-5bc68d56bd-qq9gx\" (UID: \"8979d378-8059-48bf-b5bc-4523a77ce4e5\") " pod="default/busybox-5bc68d56bd-qq9gx"
	Dec 21 18:24:57 multinode-186629 kubelet[1591]: W1221 18:24:57.129983    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio-ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459 WatchSource:0}: Error finding container ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459: Status 404 returned error can't find the container with id ffd0c52c08eabbcb5bb9c644c0f462181f1b0c421b695f287218c1fe732d5459
	Dec 21 18:25:00 multinode-186629 kubelet[1591]: I1221 18:25:00.198505    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-qq9gx" podStartSLOduration=1.732225325 podCreationTimestamp="2023-12-21 18:24:56 +0000 UTC" firstStartedPulling="2023-12-21 18:24:57.133696071 +0000 UTC m=+62.217272412" lastFinishedPulling="2023-12-21 18:24:59.599929271 +0000 UTC m=+64.683505600" observedRunningTime="2023-12-21 18:25:00.198233504 +0000 UTC m=+65.281809851" watchObservedRunningTime="2023-12-21 18:25:00.198458513 +0000 UTC m=+65.282034859"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-186629 -n multinode-186629
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-186629 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.99s)
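
Nothing in the post-mortem shows an unhealthy component, so the 2.99s failure is worth reproducing by hand. Roughly what the test name implies it exercises, pinging the host from the busybox pod on each node (a sketch; the exact probe and flags the test passes may differ):
	kubectl --context multinode-186629 exec busybox-5bc68d56bd-qq9gx -- sh -c "ping -c 1 host.minikube.internal"
	kubectl --context multinode-186629 exec busybox-5bc68d56bd-pvfqq -- sh -c "ping -c 1 host.minikube.internal"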

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.36s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.1612945723.exe start -p running-upgrade-771629 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.1612945723.exe start -p running-upgrade-771629 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.384015294s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-771629 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-771629 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.62432511s)

                                                
                                                
-- stdout --
	* [running-upgrade-771629] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-771629 in cluster running-upgrade-771629
	* Pulling base image v0.0.42-1702920864-17822 ...
	* Updating the running docker "running-upgrade-771629" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 18:38:47.120837  209325 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:38:47.121089  209325 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:38:47.121099  209325 out.go:309] Setting ErrFile to fd 2...
	I1221 18:38:47.121106  209325 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:38:47.121346  209325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:38:47.121937  209325 out.go:303] Setting JSON to false
	I1221 18:38:47.123780  209325 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4874,"bootTime":1703179053,"procs":946,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:38:47.123847  209325 start.go:138] virtualization: kvm guest
	I1221 18:38:47.125852  209325 out.go:177] * [running-upgrade-771629] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:38:47.127422  209325 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:38:47.127426  209325 notify.go:220] Checking for updates...
	I1221 18:38:47.128846  209325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:38:47.130228  209325 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:38:47.131740  209325 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:38:47.133107  209325 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:38:47.134556  209325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:38:47.136201  209325 config.go:182] Loaded profile config "running-upgrade-771629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1221 18:38:47.136221  209325 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:38:47.137897  209325 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1221 18:38:47.139061  209325 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:38:47.161638  209325 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:38:47.161717  209325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:38:47.230246  209325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2023-12-21 18:38:47.221289279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:38:47.230344  209325 docker.go:295] overlay module found
	I1221 18:38:47.231947  209325 out.go:177] * Using the docker driver based on existing profile
	I1221 18:38:47.233352  209325 start.go:298] selected driver: docker
	I1221 18:38:47.233372  209325 start.go:902] validating driver "docker" against &{Name:running-upgrade-771629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-771629 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1221 18:38:47.233461  209325 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:38:47.234495  209325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:38:47.296196  209325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2023-12-21 18:38:47.286841259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:38:47.296528  209325 cni.go:84] Creating CNI manager for ""
	I1221 18:38:47.296556  209325 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1221 18:38:47.296571  209325 start_flags.go:323] config:
	{Name:running-upgrade-771629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-771629 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1221 18:38:47.298236  209325 out.go:177] * Starting control plane node running-upgrade-771629 in cluster running-upgrade-771629
	I1221 18:38:47.299462  209325 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:38:47.300795  209325 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:38:47.302000  209325 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1221 18:38:47.302029  209325 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:38:47.319396  209325 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:38:47.319418  209325 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	W1221 18:38:47.704879  209325 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1221 18:38:47.705046  209325 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/running-upgrade-771629/config.json ...
	I1221 18:38:47.705157  209325 cache.go:107] acquiring lock: {Name:mka0839cb6a3f374936e75c7f60eb393966a0ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705295  209325 cache.go:107] acquiring lock: {Name:mk2a91ea4b400647b716033dd6c94fef47eb3aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705318  209325 cache.go:107] acquiring lock: {Name:mk298e300c1afe6bcd880c61e19d7e46d34c4985 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705356  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1221 18:38:47.705382  209325 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 240.504µs
	I1221 18:38:47.705404  209325 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1221 18:38:47.705400  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1221 18:38:47.705421  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1221 18:38:47.705153  209325 cache.go:107] acquiring lock: {Name:mka5186396ae59f86f278a99a6c1581765c698cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705398  209325 cache.go:107] acquiring lock: {Name:mkb4b08f64d808704a03ba98d01ca6d357c24642 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705435  209325 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 247.351µs
	I1221 18:38:47.705422  209325 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 130.281µs
	I1221 18:38:47.705457  209325 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1221 18:38:47.705410  209325 cache.go:107] acquiring lock: {Name:mkde32120d37536ecf720d817c914de2664c90ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705462  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1221 18:38:47.705473  209325 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 344.737µs
	I1221 18:38:47.705482  209325 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1221 18:38:47.705464  209325 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1221 18:38:47.705355  209325 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:38:47.705156  209325 cache.go:107] acquiring lock: {Name:mk7f5b252037049d30335d827e13ab3cd86c6362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705463  209325 cache.go:107] acquiring lock: {Name:mkaacb473626ac5cfd28e9f25007033f9d726120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705540  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1221 18:38:47.705551  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1221 18:38:47.705555  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1221 18:38:47.705560  209325 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 187.068µs
	I1221 18:38:47.705519  209325 start.go:365] acquiring machines lock for running-upgrade-771629: {Name:mk8c26fdd093a1285ff3f00d15c1bd39c46fa014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:38:47.705573  209325 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1221 18:38:47.705554  209325 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 197.83µs
	I1221 18:38:47.705595  209325 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1221 18:38:47.705573  209325 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 429.473µs
	I1221 18:38:47.705608  209325 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1221 18:38:47.705608  209325 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1221 18:38:47.705630  209325 start.go:369] acquired machines lock for "running-upgrade-771629" in 53.299µs
	I1221 18:38:47.705650  209325 start.go:96] Skipping create...Using existing machine configuration
	I1221 18:38:47.705660  209325 fix.go:54] fixHost starting: m01
	I1221 18:38:47.705627  209325 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 215.969µs
	I1221 18:38:47.705730  209325 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1221 18:38:47.705740  209325 cache.go:87] Successfully saved all images to host disk.
	I1221 18:38:47.705911  209325 cli_runner.go:164] Run: docker container inspect running-upgrade-771629 --format={{.State.Status}}
	I1221 18:38:47.724330  209325 fix.go:102] recreateIfNeeded on running-upgrade-771629: state=Running err=<nil>
	W1221 18:38:47.724361  209325 fix.go:128] unexpected machine state, will restart: <nil>
	I1221 18:38:47.726283  209325 out.go:177] * Updating the running docker "running-upgrade-771629" container ...
	I1221 18:38:47.727599  209325 machine.go:88] provisioning docker machine ...
	I1221 18:38:47.727629  209325 ubuntu.go:169] provisioning hostname "running-upgrade-771629"
	I1221 18:38:47.727687  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:47.744886  209325 main.go:141] libmachine: Using SSH client type: native
	I1221 18:38:47.745283  209325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1221 18:38:47.745298  209325 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-771629 && echo "running-upgrade-771629" | sudo tee /etc/hostname
	I1221 18:38:47.861199  209325 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-771629
	
	I1221 18:38:47.861298  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:47.878079  209325 main.go:141] libmachine: Using SSH client type: native
	I1221 18:38:47.878603  209325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1221 18:38:47.878632  209325 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-771629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-771629/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-771629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:38:47.990206  209325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1221 18:38:47.990236  209325 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-9881/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-9881/.minikube}
	I1221 18:38:47.990283  209325 ubuntu.go:177] setting up certificates
	I1221 18:38:47.990300  209325 provision.go:83] configureAuth start
	I1221 18:38:47.990357  209325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-771629
	I1221 18:38:48.009760  209325 provision.go:138] copyHostCerts
	I1221 18:38:48.009815  209325 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem, removing ...
	I1221 18:38:48.009827  209325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:38:48.009881  209325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem (1078 bytes)
	I1221 18:38:48.009987  209325 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem, removing ...
	I1221 18:38:48.010000  209325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:38:48.010036  209325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem (1123 bytes)
	I1221 18:38:48.010109  209325 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem, removing ...
	I1221 18:38:48.010116  209325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:38:48.010141  209325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem (1679 bytes)
	I1221 18:38:48.010200  209325 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-771629 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-771629]
	I1221 18:38:48.245680  209325 provision.go:172] copyRemoteCerts
	I1221 18:38:48.245734  209325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:38:48.245769  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:48.265191  209325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/running-upgrade-771629/id_rsa Username:docker}
	I1221 18:38:48.348256  209325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1221 18:38:48.367089  209325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1221 18:38:48.386124  209325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 18:38:48.411248  209325 provision.go:86] duration metric: configureAuth took 420.928857ms
	I1221 18:38:48.411284  209325 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:38:48.411519  209325 config.go:182] Loaded profile config "running-upgrade-771629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1221 18:38:48.411647  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:48.431115  209325 main.go:141] libmachine: Using SSH client type: native
	I1221 18:38:48.431582  209325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I1221 18:38:48.431614  209325 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 18:38:48.885619  209325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 18:38:48.885647  209325 machine.go:91] provisioned docker machine in 1.15803246s
	I1221 18:38:48.885660  209325 start.go:300] post-start starting for "running-upgrade-771629" (driver="docker")
	I1221 18:38:48.885673  209325 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:38:48.885737  209325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:38:48.885781  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:48.903967  209325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/running-upgrade-771629/id_rsa Username:docker}
	I1221 18:38:48.984668  209325 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:38:48.987573  209325 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:38:48.987601  209325 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:38:48.987615  209325 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:38:48.987623  209325 info.go:137] Remote host: Ubuntu 19.10
	I1221 18:38:48.987635  209325 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/addons for local assets ...
	I1221 18:38:48.987691  209325 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/files for local assets ...
	I1221 18:38:48.987782  209325 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> 166642.pem in /etc/ssl/certs
	I1221 18:38:48.987896  209325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:38:48.995211  209325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:38:49.011258  209325 start.go:303] post-start completed in 125.583617ms
	I1221 18:38:49.011326  209325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:38:49.011377  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:49.028527  209325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/running-upgrade-771629/id_rsa Username:docker}
	I1221 18:38:49.109514  209325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:38:49.113356  209325 fix.go:56] fixHost completed within 1.407691724s
	I1221 18:38:49.113379  209325 start.go:83] releasing machines lock for "running-upgrade-771629", held for 1.407734629s
	I1221 18:38:49.113441  209325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-771629
	I1221 18:38:49.130429  209325 ssh_runner.go:195] Run: cat /version.json
	I1221 18:38:49.130479  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:49.130536  209325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:38:49.130598  209325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-771629
	I1221 18:38:49.148611  209325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/running-upgrade-771629/id_rsa Username:docker}
	I1221 18:38:49.149192  209325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/running-upgrade-771629/id_rsa Username:docker}
	W1221 18:38:49.224017  209325 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1221 18:38:49.224099  209325 ssh_runner.go:195] Run: systemctl --version
	I1221 18:38:49.254505  209325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 18:38:49.305701  209325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:38:49.309639  209325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:38:49.324574  209325 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:38:49.324650  209325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:38:49.344611  209325 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 18:38:49.344632  209325 start.go:475] detecting cgroup driver to use...
	I1221 18:38:49.344664  209325 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:38:49.344729  209325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 18:38:49.363992  209325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 18:38:49.372676  209325 docker.go:203] disabling cri-docker service (if available) ...
	I1221 18:38:49.372742  209325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 18:38:49.381140  209325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 18:38:49.389760  209325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1221 18:38:49.397921  209325 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1221 18:38:49.397966  209325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 18:38:49.474668  209325 docker.go:219] disabling docker service ...
	I1221 18:38:49.474729  209325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 18:38:49.484146  209325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 18:38:49.492983  209325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 18:38:49.568791  209325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 18:38:49.646398  209325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 18:38:49.656001  209325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:38:49.670145  209325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1221 18:38:49.670208  209325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:38:49.680144  209325 out.go:177] 
	W1221 18:38:49.681374  209325 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1221 18:38:49.681399  209325 out.go:239] * 
	* 
	W1221 18:38:49.682264  209325 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 18:38:49.683696  209325 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-771629 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-21 18:38:49.703695125 +0000 UTC m=+2141.664216379
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-771629
helpers_test.go:235: (dbg) docker inspect running-upgrade-771629:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e77b1af8450bd3f8d264343c38eab78a70ed658408b353d1f90c7d8c9342e81",
	        "Created": "2023-12-21T18:37:45.065319221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-21T18:37:46.942202383Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/9e77b1af8450bd3f8d264343c38eab78a70ed658408b353d1f90c7d8c9342e81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e77b1af8450bd3f8d264343c38eab78a70ed658408b353d1f90c7d8c9342e81/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e77b1af8450bd3f8d264343c38eab78a70ed658408b353d1f90c7d8c9342e81/hosts",
	        "LogPath": "/var/lib/docker/containers/9e77b1af8450bd3f8d264343c38eab78a70ed658408b353d1f90c7d8c9342e81/9e77b1af8450bd3f8d264343c38eab78a70ed658408b353d1f90c7d8c9342e81-json.log",
	        "Name": "/running-upgrade-771629",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-771629:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/245a3a1031799ae1ef99e09bbc29287887730351d9922c84ab21564ecfef76b0-init/diff:/var/lib/docker/overlay2/70a650b72f9f7e3456ec84e6b9774e275bf2355936dccb98e22aeb00b5a28dc1/diff:/var/lib/docker/overlay2/75d700c79a2418cbf191aa029835e6c373dc80d3c93102ea7da532d4329007d0/diff:/var/lib/docker/overlay2/e46f08e4a9277839d8493102e1cd08eb3ddfc2297ac86724ee327798958576f4/diff:/var/lib/docker/overlay2/32999ad9cc600d5b3839f7d166eea2fe8d4124fba595706ee2695d141fc31971/diff:/var/lib/docker/overlay2/243733bbd49ca0504972f0e88773c4fc429ba89da5e4e38f23eea15798c9ee16/diff:/var/lib/docker/overlay2/b8c3aaff65ab6de364dc9bfc3197f486c87fcea7b056ed971f8aeaee95038816/diff:/var/lib/docker/overlay2/109397c40c704aca9427a90bb2ab26a80919a8bebb23ebebd04dc51c8f2cb9d5/diff:/var/lib/docker/overlay2/0c28bb7484337c0768a5f9d94b9e26fec0934393d429b0942544328afc1760e5/diff:/var/lib/docker/overlay2/b16d88eaf95dccd05c8b89b82f019d5e405616d8b0ec8602c971bd632a177fd0/diff:/var/lib/docker/overlay2/dcbcce
6f00369ccd1da973561da264ddf8f85a7b7e3697b8b953ba5088b74e38/diff:/var/lib/docker/overlay2/2b2d174f78b5c40e2c8763643dfbe5e89279979d203db0d24d316f8e50f16311/diff:/var/lib/docker/overlay2/3ace9ce9e3c76ee8827a5d0ed1eededc7ecc86517c46af53c3c0fa5671fdbbaa/diff:/var/lib/docker/overlay2/a7f6666f99481ec41ed604ec770f177e394d5539a84faff07c6ecf2266e18637/diff:/var/lib/docker/overlay2/d4d433d45aff18176f588c095013e58f2b44fba0f1ddac8af5afcb4c88fb1bf2/diff:/var/lib/docker/overlay2/c36544f53b8188e41d88e438d4421248f1a903cd7214572e10ab257dd9f909ec/diff:/var/lib/docker/overlay2/99f2fbfc91831a34d75c54f7d96107b4b1333c21b4f0e908cb8f076a4aa390d2/diff:/var/lib/docker/overlay2/50428b9a330da7c365b623a19eeb942a953e559ff755f9b0e7ebffb33405f34d/diff:/var/lib/docker/overlay2/13dc2057de471e75ab494139f0696e8c28f4ab6caaa140a6784c3c746a93b5bc/diff:/var/lib/docker/overlay2/958ba974e061176fb40f94702bd24a24eeb8e691e3caffedae59bce435984120/diff:/var/lib/docker/overlay2/26d5b1fa6c8853bcc4fcd6f47531200834e90736b02b85637b3a1dfae9c71969/diff:/var/lib/d
ocker/overlay2/ee71d7ee11398a8f5a36206ef9cb96b7b20fbbcc5579fa7e17405650ce2f2cf6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/245a3a1031799ae1ef99e09bbc29287887730351d9922c84ab21564ecfef76b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/245a3a1031799ae1ef99e09bbc29287887730351d9922c84ab21564ecfef76b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/245a3a1031799ae1ef99e09bbc29287887730351d9922c84ab21564ecfef76b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-771629",
	                "Source": "/var/lib/docker/volumes/running-upgrade-771629/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-771629",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-771629",
	                "name.minikube.sigs.k8s.io": "running-upgrade-771629",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae467dabff9017957455eea8371b9d3f51fd79f917885b8c4e12c932a54ab15f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ae467dabff90",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "8d9fba104f8d35c4660dcf4d66403702471787662eb0b91e9e2de2d136dbcba2",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "0eb6dff77fafa3178896e135266dcbb395dcad0288d90d7b83648982116bd337",
	                    "EndpointID": "8d9fba104f8d35c4660dcf4d66403702471787662eb0b91e9e2de2d136dbcba2",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
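Note: the "Ports" section of the inspect output above is what the repeated `docker container inspect -f` calls in the stderr log read to locate the forwarded SSH endpoint. As a sketch using the container name from this run, the template resolves to the port that the ssh client lines report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-771629
	# prints 32981 for this run, matching sshutil.go's "new ssh client: &{IP:127.0.0.1 Port:32981 ...}" lines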
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-771629 -n running-upgrade-771629
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-771629 -n running-upgrade-771629: exit status 4 (320.083788ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1221 18:38:50.005093  210415 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-771629" does not appear in /home/jenkins/minikube-integration/17848-9881/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-771629" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-771629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-771629
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-771629: (1.906314652s)
--- FAIL: TestRunningBinaryUpgrade (69.36s)
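Note: the root cause is visible in the stderr block above. minikube v1.32.0 reuses a container that minikube v1.9.0 created from kicbase:v0.0.8 (see the "Image" field in the docker inspect output), and that image predates CRI-O's drop-in configuration directory, so the pause-image rewrite against /etc/crio/crio.conf.d/02-crio.conf fails with "No such file or directory" and start aborts with RUNTIME_ENABLE (exit status 90). A minimal defensive sketch, assuming (not verified against the v1.9.0 image) that the legacy layout keeps its pause_image setting in /etc/crio/crio.conf, would pick whichever config file exists before editing:

	# Sketch only: prefer the drop-in file that current minikube writes; the
	# single-file /etc/crio/crio.conf fallback is an assumed legacy location.
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"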

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.1236458732.exe start -p stopped-upgrade-276178 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.1236458732.exe start -p stopped-upgrade-276178 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m40.360734904s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.1236458732.exe -p stopped-upgrade-276178 stop
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-276178 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-276178 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.961015807s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-276178] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-276178 in cluster stopped-upgrade-276178
	* Pulling base image v0.0.42-1702920864-17822 ...
	* Restarting existing docker container for "stopped-upgrade-276178" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 18:37:01.901910  186270 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:37:01.902061  186270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:37:01.902075  186270 out.go:309] Setting ErrFile to fd 2...
	I1221 18:37:01.902086  186270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:37:01.902304  186270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:37:01.902834  186270 out.go:303] Setting JSON to false
	I1221 18:37:01.904568  186270 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4769,"bootTime":1703179053,"procs":796,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:37:01.904626  186270 start.go:138] virtualization: kvm guest
	I1221 18:37:01.906714  186270 out.go:177] * [stopped-upgrade-276178] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:37:01.908594  186270 notify.go:220] Checking for updates...
	I1221 18:37:01.908603  186270 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:37:01.909953  186270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:37:01.911317  186270 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:37:01.912623  186270 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:37:01.913887  186270 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:37:01.915072  186270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:37:01.916732  186270 config.go:182] Loaded profile config "stopped-upgrade-276178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1221 18:37:01.916749  186270 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1221 18:37:01.918665  186270 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1221 18:37:01.920038  186270 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:37:01.942181  186270 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:37:01.942339  186270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:37:01.996335  186270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:75 SystemTime:2023-12-21 18:37:01.986829733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:37:01.996420  186270 docker.go:295] overlay module found
	I1221 18:37:01.998287  186270 out.go:177] * Using the docker driver based on existing profile
	I1221 18:37:01.999508  186270 start.go:298] selected driver: docker
	I1221 18:37:01.999519  186270 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-276178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-276178 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1221 18:37:01.999595  186270 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:37:02.000387  186270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:37:02.058242  186270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:75 SystemTime:2023-12-21 18:37:02.049693729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:37:02.058598  186270 cni.go:84] Creating CNI manager for ""
	I1221 18:37:02.058623  186270 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1221 18:37:02.058637  186270 start_flags.go:323] config:
	{Name:stopped-upgrade-276178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-276178 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1221 18:37:02.061133  186270 out.go:177] * Starting control plane node stopped-upgrade-276178 in cluster stopped-upgrade-276178
	I1221 18:37:02.062466  186270 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:37:02.063734  186270 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:37:02.064956  186270 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1221 18:37:02.065079  186270 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:37:02.081878  186270 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1221 18:37:02.081907  186270 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	W1221 18:37:02.483499  186270 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1221 18:37:02.483663  186270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/stopped-upgrade-276178/config.json ...
	I1221 18:37:02.483749  186270 cache.go:107] acquiring lock: {Name:mka5186396ae59f86f278a99a6c1581765c698cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483748  186270 cache.go:107] acquiring lock: {Name:mk2a91ea4b400647b716033dd6c94fef47eb3aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483771  186270 cache.go:107] acquiring lock: {Name:mkaacb473626ac5cfd28e9f25007033f9d726120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483750  186270 cache.go:107] acquiring lock: {Name:mk7f5b252037049d30335d827e13ab3cd86c6362 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483829  186270 cache.go:107] acquiring lock: {Name:mkde32120d37536ecf720d817c914de2664c90ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483897  186270 cache.go:107] acquiring lock: {Name:mkb4b08f64d808704a03ba98d01ca6d357c24642 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483946  186270 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1221 18:37:02.483958  186270 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1221 18:37:02.483965  186270 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1221 18:37:02.483934  186270 cache.go:107] acquiring lock: {Name:mka0839cb6a3f374936e75c7f60eb393966a0ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.483908  186270 cache.go:107] acquiring lock: {Name:mk298e300c1afe6bcd880c61e19d7e46d34c4985 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.484010  186270 cache.go:194] Successfully downloaded all kic artifacts
	I1221 18:37:02.484017  186270 cache.go:115] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1221 18:37:02.484044  186270 start.go:365] acquiring machines lock for stopped-upgrade-276178: {Name:mk7be42c29e75ec221abbc4ce37def5d876a226f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1221 18:37:02.484039  186270 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 299.119µs
	I1221 18:37:02.484053  186270 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:37:02.484065  186270 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1221 18:37:02.483993  186270 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1221 18:37:02.484098  186270 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1221 18:37:02.484120  186270 start.go:369] acquired machines lock for "stopped-upgrade-276178" in 63.53µs
	I1221 18:37:02.484123  186270 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1221 18:37:02.484137  186270 start.go:96] Skipping create...Using existing machine configuration
	I1221 18:37:02.484147  186270 fix.go:54] fixHost starting: m01
	I1221 18:37:02.484418  186270 cli_runner.go:164] Run: docker container inspect stopped-upgrade-276178 --format={{.State.Status}}
	I1221 18:37:02.485188  186270 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1221 18:37:02.485193  186270 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1221 18:37:02.485206  186270 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1221 18:37:02.485191  186270 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1221 18:37:02.485191  186270 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1221 18:37:02.485222  186270 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1221 18:37:02.485211  186270 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1221 18:37:02.508653  186270 fix.go:102] recreateIfNeeded on stopped-upgrade-276178: state=Stopped err=<nil>
	W1221 18:37:02.508676  186270 fix.go:128] unexpected machine state, will restart: <nil>
	I1221 18:37:02.510779  186270 out.go:177] * Restarting existing docker container for "stopped-upgrade-276178" ...
	I1221 18:37:02.511966  186270 cli_runner.go:164] Run: docker start stopped-upgrade-276178
	I1221 18:37:02.624181  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1221 18:37:02.649453  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1221 18:37:02.660635  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1221 18:37:02.670174  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1221 18:37:02.684427  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1221 18:37:02.690226  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1221 18:37:02.701644  186270 cache.go:162] opening:  /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1221 18:37:02.756817  186270 cli_runner.go:164] Run: docker container inspect stopped-upgrade-276178 --format={{.State.Status}}
	I1221 18:37:02.782297  186270 kic.go:430] container "stopped-upgrade-276178" state is running.
	I1221 18:37:02.782715  186270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-276178
	I1221 18:37:02.804992  186270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/stopped-upgrade-276178/config.json ...
	I1221 18:37:02.805305  186270 machine.go:88] provisioning docker machine ...
	I1221 18:37:02.805341  186270 ubuntu.go:169] provisioning hostname "stopped-upgrade-276178"
	I1221 18:37:02.805400  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:02.824387  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1221 18:37:02.824444  186270 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 340.576807ms
	I1221 18:37:02.824460  186270 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1221 18:37:02.827123  186270 main.go:141] libmachine: Using SSH client type: native
	I1221 18:37:02.827637  186270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1221 18:37:02.827663  186270 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-276178 && echo "stopped-upgrade-276178" | sudo tee /etc/hostname
	I1221 18:37:02.828310  186270 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55184->127.0.0.1:32968: read: connection reset by peer
	I1221 18:37:03.087528  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1221 18:37:03.087558  186270 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 603.737473ms
	I1221 18:37:03.087574  186270 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1221 18:37:03.349618  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1221 18:37:03.349731  186270 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 865.82447ms
	I1221 18:37:03.349786  186270 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1221 18:37:03.485393  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1221 18:37:03.485421  186270 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.001705615s
	I1221 18:37:03.485436  186270 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1221 18:37:03.489213  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1221 18:37:03.489302  186270 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.00553834s
	I1221 18:37:03.489320  186270 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1221 18:37:03.758571  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1221 18:37:03.758604  186270 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.274861489s
	I1221 18:37:03.758620  186270 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1221 18:37:04.143662  186270 cache.go:157] /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1221 18:37:04.143687  186270 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.659835518s
	I1221 18:37:04.143699  186270 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1221 18:37:04.143712  186270 cache.go:87] Successfully saved all images to host disk.
	I1221 18:37:05.941470  186270 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-276178
	
	I1221 18:37:05.941568  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:05.958000  186270 main.go:141] libmachine: Using SSH client type: native
	I1221 18:37:05.958320  186270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1221 18:37:05.958338  186270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-276178' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-276178/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-276178' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1221 18:37:06.060859  186270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1221 18:37:06.060892  186270 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17848-9881/.minikube CaCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17848-9881/.minikube}
	I1221 18:37:06.060921  186270 ubuntu.go:177] setting up certificates
	I1221 18:37:06.060934  186270 provision.go:83] configureAuth start
	I1221 18:37:06.060988  186270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-276178
	I1221 18:37:06.076693  186270 provision.go:138] copyHostCerts
	I1221 18:37:06.076775  186270 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem, removing ...
	I1221 18:37:06.076786  186270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem
	I1221 18:37:06.076850  186270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/ca.pem (1078 bytes)
	I1221 18:37:06.076939  186270 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem, removing ...
	I1221 18:37:06.076947  186270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem
	I1221 18:37:06.076973  186270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/cert.pem (1123 bytes)
	I1221 18:37:06.077025  186270 exec_runner.go:144] found /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem, removing ...
	I1221 18:37:06.077031  186270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem
	I1221 18:37:06.077051  186270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17848-9881/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17848-9881/.minikube/key.pem (1679 bytes)
	I1221 18:37:06.077101  186270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-276178 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-276178]
	I1221 18:37:06.363515  186270 provision.go:172] copyRemoteCerts
	I1221 18:37:06.363576  186270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1221 18:37:06.363642  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:06.380420  186270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/stopped-upgrade-276178/id_rsa Username:docker}
	I1221 18:37:06.460215  186270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1221 18:37:06.476314  186270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1221 18:37:06.492180  186270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1221 18:37:06.507824  186270 provision.go:86] duration metric: configureAuth took 446.865745ms
	I1221 18:37:06.507850  186270 ubuntu.go:193] setting minikube options for container-runtime
	I1221 18:37:06.508046  186270 config.go:182] Loaded profile config "stopped-upgrade-276178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1221 18:37:06.508131  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:06.524358  186270 main.go:141] libmachine: Using SSH client type: native
	I1221 18:37:06.524704  186270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809380] 0x80c060 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I1221 18:37:06.524729  186270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1221 18:37:07.046159  186270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1221 18:37:07.046195  186270 machine.go:91] provisioned docker machine in 4.240868823s
	I1221 18:37:07.046209  186270 start.go:300] post-start starting for "stopped-upgrade-276178" (driver="docker")
	I1221 18:37:07.046221  186270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1221 18:37:07.046322  186270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1221 18:37:07.046369  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:07.063597  186270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/stopped-upgrade-276178/id_rsa Username:docker}
	I1221 18:37:07.144912  186270 ssh_runner.go:195] Run: cat /etc/os-release
	I1221 18:37:07.147505  186270 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1221 18:37:07.147525  186270 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1221 18:37:07.147533  186270 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1221 18:37:07.147540  186270 info.go:137] Remote host: Ubuntu 19.10
	I1221 18:37:07.147548  186270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/addons for local assets ...
	I1221 18:37:07.147588  186270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17848-9881/.minikube/files for local assets ...
	I1221 18:37:07.147650  186270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem -> 166642.pem in /etc/ssl/certs
	I1221 18:37:07.147728  186270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1221 18:37:07.153801  186270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/ssl/certs/166642.pem --> /etc/ssl/certs/166642.pem (1708 bytes)
	I1221 18:37:07.170153  186270 start.go:303] post-start completed in 123.931673ms
	I1221 18:37:07.170223  186270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:37:07.170274  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:07.187333  186270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/stopped-upgrade-276178/id_rsa Username:docker}
	I1221 18:37:07.265709  186270 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1221 18:37:07.269623  186270 fix.go:56] fixHost completed within 4.785468876s
	I1221 18:37:07.269650  186270 start.go:83] releasing machines lock for "stopped-upgrade-276178", held for 4.785518492s
	I1221 18:37:07.269719  186270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-276178
	I1221 18:37:07.286201  186270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1221 18:37:07.286260  186270 ssh_runner.go:195] Run: cat /version.json
	I1221 18:37:07.286288  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:07.286316  186270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-276178
	I1221 18:37:07.303293  186270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/stopped-upgrade-276178/id_rsa Username:docker}
	I1221 18:37:07.303792  186270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/stopped-upgrade-276178/id_rsa Username:docker}
	W1221 18:37:07.407154  186270 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1221 18:37:07.407234  186270 ssh_runner.go:195] Run: systemctl --version
	I1221 18:37:07.411006  186270 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1221 18:37:07.459771  186270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1221 18:37:07.464087  186270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:37:07.479531  186270 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1221 18:37:07.479611  186270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1221 18:37:07.502310  186270 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1221 18:37:07.502331  186270 start.go:475] detecting cgroup driver to use...
	I1221 18:37:07.502359  186270 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1221 18:37:07.502396  186270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1221 18:37:07.522189  186270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1221 18:37:07.530517  186270 docker.go:203] disabling cri-docker service (if available) ...
	I1221 18:37:07.530562  186270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1221 18:37:07.538579  186270 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1221 18:37:07.546844  186270 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1221 18:37:07.554844  186270 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1221 18:37:07.554899  186270 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1221 18:37:07.621220  186270 docker.go:219] disabling docker service ...
	I1221 18:37:07.621348  186270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1221 18:37:07.630397  186270 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1221 18:37:07.639479  186270 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1221 18:37:07.707771  186270 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1221 18:37:07.771457  186270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1221 18:37:07.779968  186270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1221 18:37:07.791751  186270 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1221 18:37:07.791802  186270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1221 18:37:07.800886  186270 out.go:177] 
	W1221 18:37:07.802258  186270 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1221 18:37:07.802282  186270 out.go:239] * 
	* 
	W1221 18:37:07.803107  186270 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1221 18:37:07.804658  186270 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-276178 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (107.19s)
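
Root cause, in brief: this start path rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf, but the v1.9.0-era node image restarted here ships an older CRI-O that only has the monolithic /etc/crio/crio.conf, so the sed exits with status 2 and minikube aborts with RUNTIME_ENABLE (exit status 90). Below is a minimal Go sketch of a guarded version of that step; the drop-in/legacy fallback and running the commands locally (rather than over SSH into the node container, as minikube does) are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Both paths appear in the failure log above; the existence check and the
	// fallback to the legacy path are a hypothetical hardening sketch.
	const (
		dropIn = "/etc/crio/crio.conf.d/02-crio.conf" // modern CRI-O drop-in (absent in the old image)
		legacy = "/etc/crio/crio.conf"                // monolithic config shipped by older CRI-O
	)

	func main() {
		conf := dropIn
		if _, err := os.Stat(dropIn); err != nil {
			conf = legacy // fall back instead of letting sed fail with status 2
		}
		// The same substitution the log attempts, now pointed at a file that exists.
		sedExpr := `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`
		out, err := exec.Command("sudo", "sed", "-i", sedExpr, conf).CombinedOutput()
		if err != nil {
			fmt.Printf("update pause_image in %s failed: %v\n%s", conf, err, out)
			os.Exit(1)
		}
		fmt.Printf("pause_image updated in %s\n", conf)
	}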


Test pass (283/316)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 35.62
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 1.1
10 TestDownloadOnly/v1.28.4/json-events 17.82
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 43.13
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.2
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
25 TestDownloadOnlyKic 1.28
26 TestBinaryMirror 0.72
27 TestOffline 50.33
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 151.56
34 TestAddons/parallel/Registry 16.44
36 TestAddons/parallel/InspektorGadget 10.68
37 TestAddons/parallel/MetricsServer 5.65
38 TestAddons/parallel/HelmTiller 11.32
40 TestAddons/parallel/CSI 76.38
41 TestAddons/parallel/Headlamp 17.3
42 TestAddons/parallel/CloudSpanner 5.49
43 TestAddons/parallel/LocalPath 65.03
44 TestAddons/parallel/NvidiaDevicePlugin 6.48
45 TestAddons/parallel/Yakd 6
48 TestAddons/serial/GCPAuth/Namespaces 0.11
49 TestAddons/StoppedEnableDisable 12.12
50 TestCertOptions 26.87
51 TestCertExpiration 233.39
53 TestForceSystemdFlag 27.65
54 TestForceSystemdEnv 38.97
56 TestKVMDriverInstallOrUpdate 4.6
60 TestErrorSpam/setup 21.2
61 TestErrorSpam/start 0.59
62 TestErrorSpam/status 0.84
63 TestErrorSpam/pause 1.43
64 TestErrorSpam/unpause 1.42
65 TestErrorSpam/stop 1.38
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 38.95
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 35.95
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 2.67
77 TestFunctional/serial/CacheCmd/cache/add_local 1.89
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.11
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
85 TestFunctional/serial/ExtraConfig 32.5
86 TestFunctional/serial/ComponentHealth 0.06
87 TestFunctional/serial/LogsCmd 1.27
88 TestFunctional/serial/LogsFileCmd 1.28
89 TestFunctional/serial/InvalidService 4.48
91 TestFunctional/parallel/ConfigCmd 0.43
92 TestFunctional/parallel/DashboardCmd 19.31
93 TestFunctional/parallel/DryRun 0.41
94 TestFunctional/parallel/InternationalLanguage 0.2
95 TestFunctional/parallel/StatusCmd 1.15
99 TestFunctional/parallel/ServiceCmdConnect 8.67
100 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/PersistentVolumeClaim 37.8
103 TestFunctional/parallel/SSHCmd 0.49
104 TestFunctional/parallel/CpCmd 1.91
105 TestFunctional/parallel/MySQL 19.28
106 TestFunctional/parallel/FileSync 0.25
107 TestFunctional/parallel/CertSync 1.49
111 TestFunctional/parallel/NodeLabels 0.05
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
115 TestFunctional/parallel/License 0.63
116 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
118 TestFunctional/parallel/ProfileCmd/profile_list 0.51
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
120 TestFunctional/parallel/MountCmd/any-port 8.04
121 TestFunctional/parallel/ServiceCmd/List 0.5
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
123 TestFunctional/parallel/MountCmd/specific-port 1.82
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
125 TestFunctional/parallel/ServiceCmd/Format 0.34
126 TestFunctional/parallel/ServiceCmd/URL 0.34
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.36
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.46
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
135 TestFunctional/parallel/ImageCommands/Setup 2.44
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.21
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.75
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.48
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.5
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.83
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.08
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.76
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
157 TestFunctional/delete_addon-resizer_images 0.07
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestIngressAddonLegacy/StartLegacyK8sCluster 84.98
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.22
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
170 TestJSONOutput/start/Command 38.53
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.64
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.56
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 5.74
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.22
195 TestKicCustomNetwork/create_custom_network 38.93
196 TestKicCustomNetwork/use_default_bridge_network 26.51
197 TestKicExistingNetwork 23.34
198 TestKicCustomSubnet 23.64
199 TestKicStaticIP 26.5
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 50.46
204 TestMountStart/serial/StartWithMountFirst 5.91
205 TestMountStart/serial/VerifyMountFirst 0.24
206 TestMountStart/serial/StartWithMountSecond 8.73
207 TestMountStart/serial/VerifyMountSecond 0.24
208 TestMountStart/serial/DeleteFirst 1.58
209 TestMountStart/serial/VerifyMountPostDelete 0.24
210 TestMountStart/serial/Stop 1.21
211 TestMountStart/serial/RestartStopped 7.58
212 TestMountStart/serial/VerifyMountPostStop 0.24
215 TestMultiNode/serial/FreshStart2Nodes 80.11
216 TestMultiNode/serial/DeployApp2Nodes 5.36
218 TestMultiNode/serial/AddNode 18.92
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.26
221 TestMultiNode/serial/CopyFile 8.67
222 TestMultiNode/serial/StopNode 2.06
223 TestMultiNode/serial/StartAfterStop 10.78
224 TestMultiNode/serial/RestartKeepsNodes 116.03
225 TestMultiNode/serial/DeleteNode 4.61
226 TestMultiNode/serial/StopMultiNode 23.76
227 TestMultiNode/serial/RestartMultiNode 74.19
228 TestMultiNode/serial/ValidateNameConflict 25.24
233 TestPreload 152.69
235 TestScheduledStopUnix 99.04
238 TestInsufficientStorage 10.24
241 TestKubernetesUpgrade 368.85
242 TestMissingContainerUpgrade 172.07
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
245 TestNoKubernetes/serial/StartWithK8s 37.28
246 TestNoKubernetes/serial/StartWithStopK8s 9.31
247 TestNoKubernetes/serial/Start 5.31
251 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
252 TestNoKubernetes/serial/ProfileList 1.13
253 TestNoKubernetes/serial/Stop 1.22
258 TestNetworkPlugins/group/false 3.78
259 TestNoKubernetes/serial/StartNoArgs 7.07
263 TestStoppedBinaryUpgrade/Setup 2.14
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestStoppedBinaryUpgrade/MinikubeLogs 0.5
275 TestPause/serial/Start 51.01
276 TestNetworkPlugins/group/auto/Start 75.23
277 TestNetworkPlugins/group/kindnet/Start 71.95
278 TestPause/serial/SecondStartNoReconfiguration 36.42
279 TestNetworkPlugins/group/auto/KubeletFlags 0.31
280 TestNetworkPlugins/group/auto/NetCatPod 10.24
281 TestPause/serial/Pause 0.65
282 TestPause/serial/VerifyStatus 0.31
283 TestNetworkPlugins/group/auto/DNS 0.15
284 TestNetworkPlugins/group/auto/Localhost 0.15
285 TestPause/serial/Unpause 0.69
286 TestNetworkPlugins/group/auto/HairPin 0.13
287 TestPause/serial/PauseAgain 0.75
288 TestPause/serial/DeletePaused 2.55
289 TestPause/serial/VerifyDeletedResources 15.22
290 TestNetworkPlugins/group/calico/Start 70.09
291 TestNetworkPlugins/group/custom-flannel/Start 60.39
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
294 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
295 TestNetworkPlugins/group/kindnet/DNS 0.14
296 TestNetworkPlugins/group/kindnet/Localhost 0.15
297 TestNetworkPlugins/group/kindnet/HairPin 0.14
298 TestNetworkPlugins/group/flannel/Start 62.05
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/custom-flannel/DNS 0.15
303 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
304 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
305 TestNetworkPlugins/group/calico/KubeletFlags 0.26
306 TestNetworkPlugins/group/calico/NetCatPod 11.19
307 TestNetworkPlugins/group/calico/DNS 0.16
308 TestNetworkPlugins/group/calico/Localhost 0.12
309 TestNetworkPlugins/group/calico/HairPin 0.15
310 TestNetworkPlugins/group/bridge/Start 39.23
311 TestNetworkPlugins/group/enable-default-cni/Start 41.1
312 TestNetworkPlugins/group/flannel/ControllerPod 6.01
314 TestStartStop/group/old-k8s-version/serial/FirstStart 123.46
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
316 TestNetworkPlugins/group/flannel/NetCatPod 12.21
317 TestNetworkPlugins/group/flannel/DNS 0.16
318 TestNetworkPlugins/group/flannel/Localhost 0.13
319 TestNetworkPlugins/group/flannel/HairPin 0.12
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
321 TestNetworkPlugins/group/bridge/NetCatPod 9.3
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
324 TestNetworkPlugins/group/bridge/DNS 0.14
325 TestNetworkPlugins/group/bridge/Localhost 0.13
326 TestNetworkPlugins/group/bridge/HairPin 0.14
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
331 TestStartStop/group/no-preload/serial/FirstStart 57.18
333 TestStartStop/group/embed-certs/serial/FirstStart 42.58
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.21
336 TestStartStop/group/no-preload/serial/DeployApp 11.24
337 TestStartStop/group/embed-certs/serial/DeployApp 10.24
338 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
340 TestStartStop/group/embed-certs/serial/Stop 12.08
341 TestStartStop/group/no-preload/serial/Stop 12.01
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
344 TestStartStop/group/no-preload/serial/SecondStart 607.07
345 TestStartStop/group/embed-certs/serial/SecondStart 337.33
346 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
349 TestStartStop/group/old-k8s-version/serial/Stop 11.9
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
353 TestStartStop/group/old-k8s-version/serial/SecondStart 65.7
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.17
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
359 TestStartStop/group/old-k8s-version/serial/Pause 2.64
361 TestStartStop/group/newest-cni/serial/FirstStart 36.11
362 TestStartStop/group/newest-cni/serial/DeployApp 0
363 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
364 TestStartStop/group/newest-cni/serial/Stop 1.22
365 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
366 TestStartStop/group/newest-cni/serial/SecondStart 25.55
367 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
370 TestStartStop/group/newest-cni/serial/Pause 2.57
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
374 TestStartStop/group/embed-certs/serial/Pause 2.6
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
382 TestStartStop/group/no-preload/serial/Pause 2.51
TestDownloadOnly/v1.16.0/json-events (35.62s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-664125 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-664125 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (35.615895911s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (35.62s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (1.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-664125
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-664125: exit status 85 (1.094967836s)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-664125 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-664125        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:03:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:03:08.135438   16676 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:03:08.135548   16676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:08.135556   16676 out.go:309] Setting ErrFile to fd 2...
	I1221 18:03:08.135560   16676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:08.135722   16676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	W1221 18:03:08.135831   16676 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17848-9881/.minikube/config/config.json: open /home/jenkins/minikube-integration/17848-9881/.minikube/config/config.json: no such file or directory
	I1221 18:03:08.136366   16676 out.go:303] Setting JSON to true
	I1221 18:03:08.137160   16676 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2735,"bootTime":1703179053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:03:08.137210   16676 start.go:138] virtualization: kvm guest
	I1221 18:03:08.139464   16676 out.go:97] [download-only-664125] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:03:08.140957   16676 out.go:169] MINIKUBE_LOCATION=17848
	W1221 18:03:08.139597   16676 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball: no such file or directory
	I1221 18:03:08.139652   16676 notify.go:220] Checking for updates...
	I1221 18:03:08.143787   16676 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:03:08.145202   16676 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:03:08.146549   16676 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:03:08.147866   16676 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 18:03:08.150380   16676 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 18:03:08.150613   16676 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:03:08.172006   16676 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:03:08.172091   16676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:08.499013   16676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-21 18:03:08.490527279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:03:08.499113   16676 docker.go:295] overlay module found
	I1221 18:03:08.500813   16676 out.go:97] Using the docker driver based on user configuration
	I1221 18:03:08.500854   16676 start.go:298] selected driver: docker
	I1221 18:03:08.500862   16676 start.go:902] validating driver "docker" against <nil>
	I1221 18:03:08.500958   16676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:08.548674   16676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-12-21 18:03:08.540596947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:03:08.548820   16676 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1221 18:03:08.549298   16676 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1221 18:03:08.549447   16676 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1221 18:03:08.551309   16676 out.go:169] Using Docker driver with root privileges
	I1221 18:03:08.552820   16676 cni.go:84] Creating CNI manager for ""
	I1221 18:03:08.552834   16676 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:03:08.552846   16676 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1221 18:03:08.552855   16676 start_flags.go:323] config:
	{Name:download-only-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:08.554338   16676 out.go:97] Starting control plane node download-only-664125 in cluster download-only-664125
	I1221 18:03:08.554353   16676 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:03:08.555589   16676 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:03:08.555612   16676 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1221 18:03:08.555655   16676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:03:08.569703   16676 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:08.569857   16676 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:03:08.569958   16676 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:08.677364   16676 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1221 18:03:08.677388   16676 cache.go:56] Caching tarball of preloaded images
	I1221 18:03:08.677556   16676 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1221 18:03:08.679544   16676 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1221 18:03:08.679561   16676 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:03:08.797305   16676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1221 18:03:21.571578   16676 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:03:23.460959   16676 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:03:23.461055   16676 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:03:24.362953   16676 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1221 18:03:24.363285   16676 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/download-only-664125/config.json ...
	I1221 18:03:24.363315   16676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/download-only-664125/config.json: {Name:mk97916df386f131781170267f9010494ebc550e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1221 18:03:24.363489   16676 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1221 18:03:24.363685   16676 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-664125"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (1.10s)
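
Note on the exit status: for a --download-only profile no control plane node is ever created, so "minikube logs" legitimately fails with exit status 85, and the subtest counts that as a pass. The Go sketch below shows the general shape of asserting a specific non-zero exit code from a subprocess; it is illustrative only, not the actual aaa_download_only_test.go code, and the binary path and profile name are simply copied from the log above.

    // Minimal sketch: expect a command to fail with one specific exit code,
    // the way the LogsDuration subtests accept exit status 85 from
    // "minikube logs" on a download-only profile.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func runExpectingExit(want int, name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        var ee *exec.ExitError
        switch {
        case errors.As(err, &ee) && ee.ExitCode() == want:
            return nil // the expected failure code: treat as success
        case err != nil:
            return fmt.Errorf("unexpected error: %v\n%s", err, out)
        default:
            return fmt.Errorf("command succeeded; want exit status %d", want)
        }
    }

    func main() {
        err := runExpectingExit(85, "out/minikube-linux-amd64", "logs", "-p", "download-only-664125")
        fmt.Println(err) // <nil> when the expected status was observed
    }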

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (17.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-664125 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-664125 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (17.818814309s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (17.82s)
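
The json-events subtests drive "minikube start -o=json", which writes one JSON event per line to stdout. A hedged sketch of consuming that stream follows; the "type" field name matches minikube's CloudEvents-style JSON output as far as this report can show, so treat the exact schema as an assumption.

    // Sketch: read line-delimited JSON events from stdin (e.g. piped from
    // "minikube start -o=json ...") and print each event's "type" field.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // events can be long lines
        for sc.Scan() {
            var ev map[string]any
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip anything that is not a JSON event
            }
            fmt.Println(ev["type"])
        }
    }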

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-664125
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-664125: exit status 85 (69.958611ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-664125 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-664125        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-664125 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-664125        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:03:44
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:03:44.849845   16899 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:03:44.850064   16899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:44.850072   16899 out.go:309] Setting ErrFile to fd 2...
	I1221 18:03:44.850076   16899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:03:44.850244   16899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	W1221 18:03:44.850360   16899 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17848-9881/.minikube/config/config.json: open /home/jenkins/minikube-integration/17848-9881/.minikube/config/config.json: no such file or directory
	I1221 18:03:44.850756   16899 out.go:303] Setting JSON to true
	I1221 18:03:44.851546   16899 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2772,"bootTime":1703179053,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:03:44.851642   16899 start.go:138] virtualization: kvm guest
	I1221 18:03:44.888764   16899 out.go:97] [download-only-664125] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:03:44.888970   16899 notify.go:220] Checking for updates...
	I1221 18:03:44.985525   16899 out.go:169] MINIKUBE_LOCATION=17848
	I1221 18:03:45.108763   16899 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:03:45.193742   16899 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:03:45.321036   16899 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:03:45.351926   16899 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 18:03:45.354731   16899 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 18:03:45.355205   16899 config.go:182] Loaded profile config "download-only-664125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1221 18:03:45.355253   16899 start.go:810] api.Load failed for download-only-664125: filestore "download-only-664125": Docker machine "download-only-664125" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:03:45.355333   16899 driver.go:392] Setting default libvirt URI to qemu:///system
	W1221 18:03:45.355362   16899 start.go:810] api.Load failed for download-only-664125: filestore "download-only-664125": Docker machine "download-only-664125" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:03:45.375017   16899 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:03:45.375110   16899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:45.428139   16899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-21 18:03:45.419905362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:03:45.428345   16899 docker.go:295] overlay module found
	I1221 18:03:45.430210   16899 out.go:97] Using the docker driver based on existing profile
	I1221 18:03:45.430241   16899 start.go:298] selected driver: docker
	I1221 18:03:45.430248   16899 start.go:902] validating driver "docker" against &{Name:download-only-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:45.430378   16899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:03:45.484262   16899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-21 18:03:45.476390621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:03:45.484873   16899 cni.go:84] Creating CNI manager for ""
	I1221 18:03:45.484895   16899 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:03:45.484905   16899 start_flags.go:323] config:
	{Name:download-only-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:03:45.486696   16899 out.go:97] Starting control plane node download-only-664125 in cluster download-only-664125
	I1221 18:03:45.486722   16899 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:03:45.488117   16899 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:03:45.488148   16899 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:03:45.488248   16899 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:03:45.502499   16899 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:03:45.502595   16899 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:03:45.502609   16899 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1221 18:03:45.502617   16899 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1221 18:03:45.502627   16899 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:03:45.601927   16899 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1221 18:03:45.601962   16899 cache.go:56] Caching tarball of preloaded images
	I1221 18:03:45.602125   16899 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1221 18:03:45.604293   16899 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1221 18:03:45.604317   16899 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:03:45.719613   16899 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-664125"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
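
The download.go lines above fetch each preload tarball with a "?checksum=md5:..." query, i.e. the downloader verifies the file against that digest after fetching (the query-string convention is the one used by hashicorp/go-getter, which minikube's downloader appears to build on; treat that attribution as an assumption). A standalone sketch of the same verification, reusing the v1.28.4 filename and digest from the log:

    // Sketch: verify a downloaded preload tarball against an md5 digest.
    // The filename and expected hash are copied from the log above; the
    // helper itself is illustrative, not minikube's preload.go.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        fmt.Println(verifyMD5(
            "preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4",
            "b0bd7b3b222c094c365d9c9e10e48fc7"))
    }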

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (43.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-664125 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-664125 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (43.129113719s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (43.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-664125
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-664125: exit status 85 (70.62257ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-664125 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-664125           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-664125 | jenkins | v1.32.0 | 21 Dec 23 18:03 UTC |          |
	|         | -p download-only-664125           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-664125 | jenkins | v1.32.0 | 21 Dec 23 18:04 UTC |          |
	|         | -p download-only-664125           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/21 18:04:02
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1221 18:04:02.738179   17068 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:04:02.738287   17068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:04:02.738297   17068 out.go:309] Setting ErrFile to fd 2...
	I1221 18:04:02.738302   17068 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:04:02.738487   17068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	W1221 18:04:02.738576   17068 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17848-9881/.minikube/config/config.json: open /home/jenkins/minikube-integration/17848-9881/.minikube/config/config.json: no such file or directory
	I1221 18:04:02.738962   17068 out.go:303] Setting JSON to true
	I1221 18:04:02.739709   17068 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2790,"bootTime":1703179053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:04:02.739764   17068 start.go:138] virtualization: kvm guest
	I1221 18:04:02.742093   17068 out.go:97] [download-only-664125] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:04:02.743631   17068 out.go:169] MINIKUBE_LOCATION=17848
	I1221 18:04:02.742250   17068 notify.go:220] Checking for updates...
	I1221 18:04:02.746501   17068 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:04:02.747876   17068 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:04:02.749303   17068 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:04:02.750676   17068 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1221 18:04:02.753115   17068 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1221 18:04:02.753564   17068 config.go:182] Loaded profile config "download-only-664125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1221 18:04:02.753617   17068 start.go:810] api.Load failed for download-only-664125: filestore "download-only-664125": Docker machine "download-only-664125" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:04:02.753703   17068 driver.go:392] Setting default libvirt URI to qemu:///system
	W1221 18:04:02.753745   17068 start.go:810] api.Load failed for download-only-664125: filestore "download-only-664125": Docker machine "download-only-664125" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1221 18:04:02.773312   17068 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:04:02.773388   17068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:04:02.824614   17068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-21 18:04:02.816333309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:04:02.824707   17068 docker.go:295] overlay module found
	I1221 18:04:02.826665   17068 out.go:97] Using the docker driver based on existing profile
	I1221 18:04:02.826696   17068 start.go:298] selected driver: docker
	I1221 18:04:02.826702   17068 start.go:902] validating driver "docker" against &{Name:download-only-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:04:02.826853   17068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:04:02.884293   17068 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-12-21 18:04:02.876677342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:04:02.885164   17068 cni.go:84] Creating CNI manager for ""
	I1221 18:04:02.885194   17068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1221 18:04:02.885211   17068 start_flags.go:323] config:
	{Name:download-only-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:04:02.887158   17068 out.go:97] Starting control plane node download-only-664125 in cluster download-only-664125
	I1221 18:04:02.887178   17068 cache.go:121] Beginning downloading kic base image for docker with crio
	I1221 18:04:02.888570   17068 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1221 18:04:02.888592   17068 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1221 18:04:02.888684   17068 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1221 18:04:02.903056   17068 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1221 18:04:02.903175   17068 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1221 18:04:02.903194   17068 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1221 18:04:02.903202   17068 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1221 18:04:02.903218   17068 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1221 18:04:03.323310   17068 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 18:04:03.323340   17068 cache.go:56] Caching tarball of preloaded images
	I1221 18:04:03.323485   17068 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1221 18:04:03.325194   17068 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1221 18:04:03.325215   17068 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:04:03.442984   17068 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1221 18:04:17.120333   17068 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:04:17.120410   17068 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17848-9881/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1221 18:04:17.932218   17068 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1221 18:04:17.932323   17068 profile.go:148] Saving config to /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/download-only-664125/config.json ...
	I1221 18:04:17.932519   17068 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1221 18:04:17.932683   17068 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17848-9881/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-664125"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)
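
One detail worth noticing across the three runs: the kubectl downloads use "checksum=file:...", so the binary is checked against a published digest file rather than an inline hash, and the digest format differs by release (a .sha1 file for v1.16.0, .sha256 for v1.29.0-rc.2). A sketch of the sha256 variant, with the URL layout copied from the log and network access assumed:

    // Sketch: fetch the published SHA-256 for a kubectl release and compare
    // it with a local copy, mirroring the "checksum=file:..." pattern above.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func main() {
        const url = "https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        pub, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }

        f, err := os.Open("kubectl") // e.g. the cached copy under .minikube/cache/linux/amd64/v1.29.0-rc.2
        if err != nil {
            panic(err)
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            panic(err)
        }

        if strings.TrimSpace(string(pub)) == hex.EncodeToString(h.Sum(nil)) {
            fmt.Println("kubectl checksum OK")
        } else {
            fmt.Println("kubectl checksum mismatch")
        }
    }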

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-664125
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.28s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-939435 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-939435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-939435
--- PASS: TestDownloadOnlyKic (1.28s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-832088 --alsologtostderr --binary-mirror http://127.0.0.1:44331 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-832088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-832088
--- PASS: TestBinaryMirror (0.72s)

TestOffline (50.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-809623 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-809623 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (47.976312863s)
helpers_test.go:175: Cleaning up "offline-crio-809623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-809623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-809623: (2.357040922s)
--- PASS: TestOffline (50.33s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-443778
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-443778: exit status 85 (63.10772ms)

-- stdout --
	* Profile "addons-443778" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-443778"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-443778
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-443778: exit status 85 (64.575605ms)

-- stdout --
	* Profile "addons-443778" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-443778"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (151.56s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-443778 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-443778 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.555591444s)
--- PASS: TestAddons/Setup (151.56s)

TestAddons/parallel/Registry (16.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 14.011989ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-hwsjn" [d18312cb-1683-475a-9ef7-6ab05125ff04] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020351593s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vvt2k" [756697ed-d980-4a8f-817f-861ce02cbf7a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004229774s
addons_test.go:340: (dbg) Run:  kubectl --context addons-443778 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-443778 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-443778 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.597621453s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 ip
2023/12/21 18:07:35 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.44s)
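
The registry check above reduces to two probes, both visible in the log: an in-cluster reachability test of the registry Service, and an HTTP GET against port 5000 on the node IP. A sketch reproducing them by hand:

    # Reachability probe from inside the cluster; wget --spider fetches headers only.
    kubectl --context addons-443778 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Then hit the registry endpoint on the node IP reported by minikube.
    NODE_IP=$(out/minikube-linux-amd64 -p addons-443778 ip)
    curl -s "http://${NODE_IP}:5000"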

                                                
                                    
TestAddons/parallel/InspektorGadget (10.68s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jf7w2" [d5f45b36-6080-4eff-bebb-768ccfd4d2a0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00526515s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-443778
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-443778: (5.669823454s)
--- PASS: TestAddons/parallel/InspektorGadget (10.68s)

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 14.207246ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-gwrhh" [457dc7a0-b171-45a7-845e-0ddc04fd4f40] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010105242s
addons_test.go:415: (dbg) Run:  kubectl --context addons-443778 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/HelmTiller (11.32s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 11.793931ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gtq28" [a26b272e-503d-4266-bcfc-55f9edf733a2] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009199805s
addons_test.go:473: (dbg) Run:  kubectl --context addons-443778 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-443778 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.664299267s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.32s)
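
The tiller check above is a one-shot pod: it runs the Helm 2.x client image inside kube-system and asks for `version`, which only succeeds if the client can reach tiller-deploy. The same probe by hand:

    # Throwaway pod; `version` reports both client and server (tiller) versions.
    kubectl --context addons-443778 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version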

                                                
                                    
TestAddons/parallel/CSI (76.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.615249ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2bf1e4db-3fb3-49ae-900f-86a21d6f8de5] Pending
helpers_test.go:344: "task-pv-pod" [2bf1e4db-3fb3-49ae-900f-86a21d6f8de5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2bf1e4db-3fb3-49ae-900f-86a21d6f8de5] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003880601s
addons_test.go:584: (dbg) Run:  kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-443778 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-443778 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-443778 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-443778 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d4ba8d98-4ace-4f1d-b4f7-0fd703eb80d1] Pending
helpers_test.go:344: "task-pv-pod-restore" [d4ba8d98-4ace-4f1d-b4f7-0fd703eb80d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d4ba8d98-4ace-4f1d-b4f7-0fd703eb80d1] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003516229s
addons_test.go:626: (dbg) Run:  kubectl --context addons-443778 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-443778 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-443778 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-443778 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.651229199s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (76.38s)
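
The CSI test above is a full provision/snapshot/restore round trip against the csi-hostpath driver, using the manifests under the repo's testdata/csi-hostpath-driver directory. A condensed sketch of the same sequence (the jsonpath query is what the readiness poll above loops on):

    kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pvc.yaml
    # Poll until the claim reports Bound, then attach a consumer pod.
    kubectl --context addons-443778 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    # Snapshot the volume and wait for readyToUse=true.
    kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-443778 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    # Drop the original consumers, then restore a new claim and pod from the snapshot.
    kubectl --context addons-443778 delete pod task-pv-pod
    kubectl --context addons-443778 delete pvc hpvc
    kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-443778 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml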

                                                
                                    
TestAddons/parallel/Headlamp (17.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-443778 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-443778 --alsologtostderr -v=1: (1.293334621s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-r2xfj" [1a8e9fb9-826a-424c-89a8-72181f3f278c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-r2xfj" [1a8e9fb9-826a-424c-89a8-72181f3f278c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004522217s
--- PASS: TestAddons/parallel/Headlamp (17.30s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-qtrt5" [f5f0c7d6-e112-4f8e-b5cc-75ce23fc310e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009447542s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-443778
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (65.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-443778 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-443778 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [03314d30-f651-4ac6-8821-b0e3b2e35803] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [03314d30-f651-4ac6-8821-b0e3b2e35803] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [03314d30-f651-4ac6-8821-b0e3b2e35803] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 14.003822426s
addons_test.go:891: (dbg) Run:  kubectl --context addons-443778 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 ssh "cat /opt/local-path-provisioner/pvc-4dcfabe1-8499-4626-b912-956875087aab_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-443778 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-443778 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-443778 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-443778 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.22966002s)
--- PASS: TestAddons/parallel/LocalPath (65.03s)
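
The local-path flow above writes through a PVC provisioned by the rancher local-path provisioner and then reads the file back from the node's filesystem over SSH. A sketch; note the pvc-… directory name is generated per claim, so the exact path differs on every run:

    kubectl --context addons-443778 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-443778 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # After the writer pod completes, the data sits on the node under
    # /opt/local-path-provisioner/<generated-pvc-dir>/.
    out/minikube-linux-amd64 -p addons-443778 ssh "ls /opt/local-path-provisioner/"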

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7jcgx" [b94cc79e-0503-4022-93ae-c3ab0f768f0c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004254016s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-443778
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-v4mrb" [7acf9f63-afd6-4ef6-9a5a-d8810d3c6e13] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003583463s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-443778 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-443778 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.12s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-443778
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-443778: (11.853153001s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-443778
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-443778
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-443778
--- PASS: TestAddons/StoppedEnableDisable (12.12s)

TestCertOptions (26.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-261760 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-261760 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.206737844s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-261760 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-261760 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-261760 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-261760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-261760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-261760: (2.072509242s)
--- PASS: TestCertOptions (26.87s)
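
The assertions above can be reproduced by starting a cluster with custom apiserver SANs and a non-default port, then dumping the generated serving certificate. A sketch with a hypothetical profile name:

    # Bake extra IPs/names into the apiserver certificate and move the port.
    out/minikube-linux-amd64 start -p cert-demo --memory=2048 \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=crio
    # Inspect the SANs that ended up in the certificate.
    out/minikube-linux-amd64 -p cert-demo ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    out/minikube-linux-amd64 delete -p cert-demo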

                                                
                                    
TestCertExpiration (233.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-871049 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-871049 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.10152862s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-871049 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-871049 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.803697654s)
helpers_test.go:175: Cleaning up "cert-expiration-871049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-871049
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-871049: (2.481315494s)
--- PASS: TestCertExpiration (233.39s)
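
The test above leans on the fact that re-running start with a different --cert-expiration regenerates the certificates on an existing profile. Sketch (hypothetical profile name):

    # Issue short-lived certs, then renew them by restarting with a longer TTL.
    out/minikube-linux-amd64 start -p cert-exp-demo --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p cert-exp-demo --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=crio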

                                                
                                    
TestForceSystemdFlag (27.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-792822 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1221 18:37:19.987430   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:37:25.907761   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-792822 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.201195039s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-792822 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-792822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-792822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-792822: (3.173650351s)
--- PASS: TestForceSystemdFlag (27.65s)
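
The check above asserts that --force-systemd is reflected in CRI-O's drop-in config. A sketch with a hypothetical profile name; the expectation that the drop-in selects the systemd cgroup manager is an assumption here, not shown in the log:

    out/minikube-linux-amd64 start -p systemd-demo --memory=2048 --force-systemd \
      --driver=docker --container-runtime=crio
    # The test reads this drop-in; with --force-systemd it should contain a
    # systemd cgroup-manager setting.
    out/minikube-linux-amd64 -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"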

                                                
                                    
TestForceSystemdEnv (38.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-841317 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-841317 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.539993525s)
helpers_test.go:175: Cleaning up "force-systemd-env-841317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-841317
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-841317: (2.428153757s)
--- PASS: TestForceSystemdEnv (38.97s)

TestKVMDriverInstallOrUpdate (4.6s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.60s)

TestErrorSpam/setup (21.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-416394 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-416394 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-416394 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-416394 --driver=docker  --container-runtime=crio: (21.199274591s)
--- PASS: TestErrorSpam/setup (21.20s)

TestErrorSpam/start (0.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (1.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 stop: (1.192730281s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-416394 --log_dir /tmp/nospam-416394 stop
--- PASS: TestErrorSpam/stop (1.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17848-9881/.minikube/files/etc/test/nested/copy/16664/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-209430 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-209430 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.949462336s)
--- PASS: TestFunctional/serial/StartWithProxy (38.95s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.95s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-209430 --alsologtostderr -v=8
E1221 18:12:19.986937   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:19.992650   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:20.002897   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:20.023137   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:20.063341   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:20.143664   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:20.304110   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:20.624668   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:21.265576   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:22.545896   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:25.107039   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:12:30.227524   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-209430 --alsologtostderr -v=8: (35.953596328s)
functional_test.go:659: soft start took 35.954254714s for "functional-209430" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.95s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-209430 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cache add registry.k8s.io/pause:3.3
E1221 18:12:40.468204   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.67s)

TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-209430 /tmp/TestFunctionalserialCacheCmdcacheadd_local500675625/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cache add minikube-local-cache-test:functional-209430
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 cache add minikube-local-cache-test:functional-209430: (1.57771266s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cache delete minikube-local-cache-test:functional-209430
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-209430
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)
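
The add_local flow builds a throwaway image on the host and pushes it into minikube's image cache, mirroring the steps in the log. Sketch; the build context directory is hypothetical:

    docker build -t minikube-local-cache-test:functional-209430 ./some-build-context
    out/minikube-linux-amd64 -p functional-209430 cache add minikube-local-cache-test:functional-209430
    # Drop it from the cache and from the host daemon once done.
    out/minikube-linux-amd64 -p functional-209430 cache delete minikube-local-cache-test:functional-209430
    docker rmi minikube-local-cache-test:functional-209430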

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (264.215685ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
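
The cache_reload check deletes a cached image from the node's runtime and confirms `cache reload` restores it from minikube's local cache. The same round trip by hand:

    # Remove the image from the node's container runtime...
    out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # ...confirm it is gone (inspecti exits non-zero)...
    out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # ...then re-push from the cache and re-check.
    out/minikube-linux-amd64 -p functional-209430 cache reload
    out/minikube-linux-amd64 -p functional-209430 ssh sudo crictl inspecti registry.k8s.io/pause:latest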

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 kubectl -- --context functional-209430 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-209430 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-209430 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1221 18:13:00.948471   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-209430 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.498704705s)
functional_test.go:757: restart took 32.498904502s for "functional-209430" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.50s)
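
ExtraConfig restarts the running profile with a component flag threaded through --extra-config, where the component.key=value form maps onto the kube-apiserver's own flag names:

    # Restart with an extra admission plugin enabled on the apiserver.
    out/minikube-linux-amd64 start -p functional-209430 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all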

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-209430 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 logs: (1.267629066s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 logs --file /tmp/TestFunctionalserialLogsFileCmd3933380654/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 logs --file /tmp/TestFunctionalserialLogsFileCmd3933380654/001/logs.txt: (1.2815515s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-209430 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-209430
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-209430: exit status 115 (318.700301ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31404 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-209430 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-209430 delete -f testdata/invalidsvc.yaml: (1.002791786s)
--- PASS: TestFunctional/serial/InvalidService (4.48s)
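Note: exit status 115 is the SVC_UNREACHABLE exit shown in the stderr block above: the Service object exists, but no running pod backs it, so minikube refuses to hand out a URL. The repro is just the commands from this log (testdata/invalidsvc.yaml is not reproduced here; any Service whose selector matches no running pod behaves the same way):

    kubectl --context functional-209430 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-209430     # exits 115 (SVC_UNREACHABLE)
    kubectl --context functional-209430 delete -f testdata/invalidsvc.yaml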
TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 config get cpus: exit status 14 (89.865978ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 config get cpus: exit status 14 (63.984359ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
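Note: both Non-zero exits above are the expected path, not failures: config get on an unset key exits 14 with "specified key could not be found in config". The round trip the test drives, condensed:

    out/minikube-linux-amd64 -p functional-209430 config get cpus       # exit 14: key unset
    out/minikube-linux-amd64 -p functional-209430 config set cpus 2
    out/minikube-linux-amd64 -p functional-209430 config get cpus       # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-209430 config unset cpus
    out/minikube-linux-amd64 -p functional-209430 config get cpus       # exit 14 again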
TestFunctional/parallel/DashboardCmd (19.31s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-209430 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-209430 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49805: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.31s)

TestFunctional/parallel/DryRun (0.41s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.397931ms)
-- stdout --
	* [functional-209430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1221 18:13:29.009305   49312 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:13:29.010021   49312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:13:29.010036   49312 out.go:309] Setting ErrFile to fd 2...
	I1221 18:13:29.010043   49312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:13:29.010327   49312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:13:29.010967   49312 out.go:303] Setting JSON to false
	I1221 18:13:29.012174   49312 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3356,"bootTime":1703179053,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:13:29.012250   49312 start.go:138] virtualization: kvm guest
	I1221 18:13:29.014088   49312 out.go:177] * [functional-209430] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:13:29.015954   49312 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:13:29.015956   49312 notify.go:220] Checking for updates...
	I1221 18:13:29.017448   49312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:13:29.018805   49312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:13:29.020154   49312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:13:29.021554   49312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:13:29.022948   49312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:13:29.024484   49312 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:13:29.024901   49312 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:13:29.049533   49312 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:13:29.049642   49312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:13:29.131922   49312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:49 SystemTime:2023-12-21 18:13:29.116483248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:13:29.132067   49312 docker.go:295] overlay module found
	I1221 18:13:29.134938   49312 out.go:177] * Using the docker driver based on existing profile
	I1221 18:13:29.136329   49312 start.go:298] selected driver: docker
	I1221 18:13:29.136347   49312 start.go:902] validating driver "docker" against &{Name:functional-209430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-209430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:13:29.136470   49312 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:13:29.138855   49312 out.go:177] 
	W1221 18:13:29.140116   49312 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1221 18:13:29.141389   49312 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-209430 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
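Note: --dry-run validates flags against the existing profile without mutating it; a memory request below the 1800MB floor exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second run above (no --memory override) validates cleanly. A sketch of the boundary against the same profile (that exactly 1800MB passes is inferred from the error text, not shown in this log):

    out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 250MB --driver=docker --container-runtime=crio     # exit 23
    out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 1800MB --driver=docker --container-runtime=crio    # expected to validate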
TestFunctional/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.26559ms)
-- stdout --
	* [functional-209430] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1221 18:13:28.821435   49145 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:13:28.821592   49145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:13:28.821625   49145 out.go:309] Setting ErrFile to fd 2...
	I1221 18:13:28.821632   49145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:13:28.821918   49145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:13:28.822567   49145 out.go:303] Setting JSON to false
	I1221 18:13:28.823853   49145 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3356,"bootTime":1703179053,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:13:28.823939   49145 start.go:138] virtualization: kvm guest
	I1221 18:13:28.827246   49145 out.go:177] * [functional-209430] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1221 18:13:28.828589   49145 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:13:28.828831   49145 notify.go:220] Checking for updates...
	I1221 18:13:28.831313   49145 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:13:28.832692   49145 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:13:28.834327   49145 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:13:28.838296   49145 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:13:28.839882   49145 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:13:28.841812   49145 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:13:28.842331   49145 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:13:28.869166   49145 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:13:28.869304   49145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:13:28.933623   49145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-12-21 18:13:28.925183765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:13:28.933750   49145 docker.go:295] overlay module found
	I1221 18:13:28.935724   49145 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1221 18:13:28.937167   49145 start.go:298] selected driver: docker
	I1221 18:13:28.937181   49145 start.go:902] validating driver "docker" against &{Name:functional-209430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-209430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1221 18:13:28.937291   49145 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:13:28.939540   49145 out.go:177] 
	W1221 18:13:28.941190   49145 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1221 18:13:28.942950   49145 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
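Note: this is the DryRun scenario again with minikube localized to French; "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French rendering of the English "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" message above. The log does not show how the locale is selected; a plausible hand repro, assuming minikube picks the locale up from the environment:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-209430 --dry-run --memory 250MB --driver=docker --container-runtime=crio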
TestFunctional/parallel/StatusCmd (1.15s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)

TestFunctional/parallel/ServiceCmdConnect (8.67s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-209430 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-209430 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xrfmj" [7152ed8e-bb43-4cfe-ac27-207640e639f9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xrfmj" [7152ed8e-bb43-4cfe-ac27-207640e639f9] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003549013s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31245
functional_test.go:1674: http://192.168.49.2:31245: success! body:

Hostname: hello-node-connect-55497b8b78-xrfmj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31245
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
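Note: this block is the standard NodePort round trip, and the body above is simply the request reflected back by echoserver. Condensed from the commands in this log:

    kubectl --context functional-209430 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-209430 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-209430 service hello-node-connect --url)    # e.g. http://192.168.49.2:31245
    curl "$URL"    # returns the Hostname / Request Information body shown above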
TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (37.8s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3f3831bc-26d1-45df-83d8-08e6c646b3e1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003567853s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-209430 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-209430 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-209430 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-209430 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d57e583e-f08c-4e37-ae2b-e20bc114b8cd] Pending
helpers_test.go:344: "sp-pod" [d57e583e-f08c-4e37-ae2b-e20bc114b8cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d57e583e-f08c-4e37-ae2b-e20bc114b8cd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003924131s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-209430 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-209430 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-209430 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [98a10215-b5a6-4db7-84e8-a6c95b733e77] Pending
helpers_test.go:344: "sp-pod" [98a10215-b5a6-4db7-84e8-a6c95b733e77] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [98a10215-b5a6-4db7-84e8-a6c95b733e77] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.060913263s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-209430 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.80s)
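Note: the touch file / delete pod / recreate pod / ls sequence above passes because the claim, and the volume bound to it, outlive any single pod. testdata/storage-provisioner/pvc.yaml is not reproduced in this log; a minimal claim with the same name would look like this sketch (apply with kubectl --context functional-209430 apply -f pvc.yaml; access mode and size are illustrative assumptions):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim        # name taken from the get pvc call above
    spec:
      accessModes:
        - ReadWriteOnce    # assumption: not shown in the log
      resources:
        requests:
          storage: 500Mi   # assumption: not shown in the log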
TestFunctional/parallel/SSHCmd (0.49s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

TestFunctional/parallel/CpCmd (1.91s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh -n functional-209430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cp functional-209430:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1487248922/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh -n functional-209430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh -n functional-209430 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)

TestFunctional/parallel/MySQL (19.28s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-209430 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-x5v7g" [4f44c4e9-7d6b-40e0-b804-f95426eb7c4d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-x5v7g" [4f44c4e9-7d6b-40e0-b804-f95426eb7c4d] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004068631s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-209430 exec mysql-859648c796-x5v7g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.28s)
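Note: the final assertion execs the mysql client inside the pod with the root password inline (-ppassword, no space between flag and value); it only succeeds once mysqld accepts connections, so it doubles as a readiness probe. Any query works at that point; the second line below is a hypothetical follow-up, not from this log:

    kubectl --context functional-209430 exec mysql-859648c796-x5v7g -- mysql -ppassword -e "show databases;"
    kubectl --context functional-209430 exec mysql-859648c796-x5v7g -- mysql -ppassword -e "select version();"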
TestFunctional/parallel/FileSync (0.25s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/16664/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /etc/test/nested/copy/16664/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.49s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/16664.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /etc/ssl/certs/16664.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/16664.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /usr/share/ca-certificates/16664.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/166642.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /etc/ssl/certs/166642.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/166642.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /usr/share/ca-certificates/166642.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)
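Note: 51391683.0 and 3ec20f2e.0 are OpenSSL subject-hash filenames; each synced certificate is checked under both its literal name (16664.pem, 166642.pem) and its hash name so that tools scanning /etc/ssl/certs can resolve it. The hash can be recomputed with a stock openssl invocation (path reused from the log; the value should match the .0 filename when run against the same certificate):

    openssl x509 -noout -hash -in /usr/share/ca-certificates/16664.pem    # expected: 51391683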
TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-209430 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh "sudo systemctl is-active docker": exit status 1 (251.091778ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh "sudo systemctl is-active containerd": exit status 1 (248.325715ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
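Note: with crio as the active runtime, the docker and containerd units must be inactive; systemctl is-active exits 0 only for an active unit, and the exit status 3 above is its "inactive" code, which is exactly what the test wants. The positive control, sketched (the unit name crio is an assumption, not shown in this log):

    out/minikube-linux-amd64 -p functional-209430 ssh "sudo systemctl is-active crio"    # expected: active, exit 0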
TestFunctional/parallel/License (0.63s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-209430 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-209430 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-w7hwl" [6382327c-e831-4cd5-8404-e8770d5400de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-w7hwl" [6382327c-e831-4cd5-8404-e8770d5400de] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003744387s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "437.520114ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "67.716474ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "345.195791ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "66.79326ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.04s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdany-port506561127/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703182407407662664" to /tmp/TestFunctionalparallelMountCmdany-port506561127/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703182407407662664" to /tmp/TestFunctionalparallelMountCmdany-port506561127/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703182407407662664" to /tmp/TestFunctionalparallelMountCmdany-port506561127/001/test-1703182407407662664
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (386.323415ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 21 18:13 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 21 18:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 21 18:13 test-1703182407407662664
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh cat /mount-9p/test-1703182407407662664
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-209430 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ae30db84-b4ef-4c2e-aadd-76a110233d3d] Pending
helpers_test.go:344: "busybox-mount" [ae30db84-b4ef-4c2e-aadd-76a110233d3d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ae30db84-b4ef-4c2e-aadd-76a110233d3d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ae30db84-b4ef-4c2e-aadd-76a110233d3d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003953435s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-209430 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdany-port506561127/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.04s)
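Note: the single findmnt failure above is an expected race: the mount daemon had not finished exporting the 9p share on the first probe, and the immediate retry succeeded. The host-to-guest round trip reduces to the following sketch (the host directory is a placeholder; minikube mount blocks, hence the backgrounding):

    out/minikube-linux-amd64 mount -p functional-209430 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-209430 ssh -- ls -la /mount-9p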
TestFunctional/parallel/ServiceCmd/List (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 service list -o json
functional_test.go:1493: Took "549.95576ms" to run "out/minikube-linux-amd64 -p functional-209430 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/MountCmd/specific-port (1.82s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdspecific-port1659527678/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (342.173758ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdspecific-port1659527678/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh "sudo umount -f /mount-9p": exit status 1 (266.37446ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-209430 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdspecific-port1659527678/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)
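Note: the failing umount at the end is the cleanup path working as intended: the mount daemon is stopped first, so the share is already gone and umount -f /mount-9p reports "not mounted" with exit status 32, umount's failure code, which the test tolerates. Pinning the 9p port as this subtest does (host directory is a placeholder):

    out/minikube-linux-amd64 mount -p functional-209430 /tmp/hostdir:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T /mount-9p | grep 9p"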
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32618
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32618
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665701357/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665701357/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665701357/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T" /mount1: exit status 1 (294.916671ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-209430 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665701357/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665701357/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-209430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665701357/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.36s)
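Note: VerifyCleanup probes each mount point with `findmnt -T <path>` over SSH; the first probe fails with exit status 1 while the mounts are still coming up, then succeeds on retry. A minimal local sketch of the same probe (a hypothetical helper, not the test's own code; findmnt exits non-zero when it cannot resolve the target, as in the transient failure above):

package main

import (
	"fmt"
	"os/exec"
)

// resolvable reports whether findmnt can resolve path to a filesystem,
// mirroring the `findmnt -T` probe used in the log above.
func resolvable(path string) bool {
	return exec.Command("findmnt", "-T", path).Run() == nil
}

func main() {
	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
		fmt.Printf("%s resolvable by findmnt: %v\n", m, resolvable(m))
	}
}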

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-209430 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-209430
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-209430 image ls --format short --alsologtostderr:
I1221 18:14:00.067551   55322 out.go:296] Setting OutFile to fd 1 ...
I1221 18:14:00.067823   55322 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.067832   55322 out.go:309] Setting ErrFile to fd 2...
I1221 18:14:00.067836   55322 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.068031   55322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
I1221 18:14:00.068600   55322 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.068710   55322 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.069090   55322 cli_runner.go:164] Run: docker container inspect functional-209430 --format={{.State.Status}}
I1221 18:14:00.084781   55322 ssh_runner.go:195] Run: systemctl --version
I1221 18:14:00.084851   55322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209430
I1221 18:14:00.100212   55322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/functional-209430/id_rsa Username:docker}
I1221 18:14:00.185545   55322 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-209430 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-209430  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | 529b5644c430c | 44.4MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-209430 image ls --format table --alsologtostderr:
I1221 18:14:00.600173   55556 out.go:296] Setting OutFile to fd 1 ...
I1221 18:14:00.600408   55556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.600416   55556 out.go:309] Setting ErrFile to fd 2...
I1221 18:14:00.600421   55556 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.600636   55556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
I1221 18:14:00.601204   55556 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.601332   55556 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.601735   55556 cli_runner.go:164] Run: docker container inspect functional-209430 --format={{.State.Status}}
I1221 18:14:00.617953   55556 ssh_runner.go:195] Run: systemctl --version
I1221 18:14:00.617999   55556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209430
I1221 18:14:00.633563   55556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/functional-209430/id_rsa Username:docker}
I1221 18:14:00.712910   55556 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-209430 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9","docker.io/library/nginx@sha256:bd30b8d47b230de52431cc71c5cce149b8d5d4c87c204902acf2504435d4b4c9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190866888"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":["docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44405005"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-209430"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-209430 image ls --format json --alsologtostderr:
I1221 18:14:00.382693   55463 out.go:296] Setting OutFile to fd 1 ...
I1221 18:14:00.382950   55463 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.382959   55463 out.go:309] Setting ErrFile to fd 2...
I1221 18:14:00.382964   55463 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.383213   55463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
I1221 18:14:00.383824   55463 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.383959   55463 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.384380   55463 cli_runner.go:164] Run: docker container inspect functional-209430 --format={{.State.Status}}
I1221 18:14:00.400989   55463 ssh_runner.go:195] Run: systemctl --version
I1221 18:14:00.401039   55463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209430
I1221 18:14:00.419135   55463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/functional-209430/id_rsa Username:docker}
I1221 18:14:00.501314   55463 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
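Note: the JSON above is a single array with one entry per image. A minimal consumer sketch; the struct mirrors only the fields visible in this run's output (id, repoDigests, repoTags, size) and is an illustration, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-209430",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s\t%s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}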

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-209430 image ls --format yaml --alsologtostderr:
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests:
- docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "44405005"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-209430
size: "34114467"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
- docker.io/library/nginx@sha256:bd30b8d47b230de52431cc71c5cce149b8d5d4c87c204902acf2504435d4b4c9
repoTags:
- docker.io/library/nginx:latest
size: "190866888"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-209430 image ls --format yaml --alsologtostderr:
I1221 18:14:00.160961   55353 out.go:296] Setting OutFile to fd 1 ...
I1221 18:14:00.161225   55353 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.161255   55353 out.go:309] Setting ErrFile to fd 2...
I1221 18:14:00.161263   55353 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1221 18:14:00.161493   55353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
I1221 18:14:00.162071   55353 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.162179   55353 config.go:182] Loaded profile config "functional-209430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1221 18:14:00.162601   55353 cli_runner.go:164] Run: docker container inspect functional-209430 --format={{.State.Status}}
I1221 18:14:00.179017   55353 ssh_runner.go:195] Run: systemctl --version
I1221 18:14:00.179061   55353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209430
I1221 18:14:00.194892   55353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/functional-209430/id_rsa Username:docker}
I1221 18:14:00.277260   55353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/Setup (2.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.416105154s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-209430
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-209430 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-209430 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-209430 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 52398: os: process already finished
helpers_test.go:508: unable to kill pid 52239: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-209430 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-209430 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-209430 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [adb67598-871b-41a0-a039-6e1343f08048] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [adb67598-871b-41a0-a039-6e1343f08048] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.003783765s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image load --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr
E1221 18:13:41.909310   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 image load --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr: (4.550847351s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.75s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image load --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr
2023/12/21 18:13:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 image load --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr: (3.256966123s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.48s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.390988009s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-209430
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image load --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-209430 image load --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr: (3.879515657s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image save gcr.io/google-containers/addon-resizer:functional-209430 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image rm gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-209430
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-209430 image save --daemon gcr.io/google-containers/addon-resizer:functional-209430 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-209430
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.76s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-209430 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.203.215 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-209430 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-209430
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-209430
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-209430
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (84.98s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-341255 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1221 18:15:03.829551   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-341255 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m24.979441819s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.98s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.22s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-341255 addons enable ingress --alsologtostderr -v=5: (15.218990547s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.22s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-341255 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestJSONOutput/start/Command (38.53s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-162485 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-162485 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (38.533610145s)
--- PASS: TestJSONOutput/start/Command (38.53s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-162485 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-162485 --output=json --user=testUser
E1221 18:19:47.615577   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-162485 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-162485 --output=json --user=testUser: (5.734847207s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-308913 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-308913 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.807175ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"60c08dcf-6f9e-4768-8915-1661d5492e53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-308913] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10258aad-bd60-44ef-9085-a17b0aabb355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17848"}}
	{"specversion":"1.0","id":"7c519e29-f383-408c-88d1-eda3fc508dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc3a9906-5de5-44fe-b369-634028662999","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig"}}
	{"specversion":"1.0","id":"764a158d-3c30-4d8e-985f-2c60c0638c0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube"}}
	{"specversion":"1.0","id":"3feb28b1-65ae-4d53-8104-3098f72b2f61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"67690123-ec94-4675-9a9d-6d57e169bc05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f948f4d-de70-449f-b827-1801b391bd4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
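The --output=json events above are CloudEvents-style JSON, one object per line. A minimal sketch for pulling the error event out of such a stream (assuming jq is available on the host; the profile name is just the one this run used):

	out/minikube-linux-amd64 start -p json-output-error-308913 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# per the event above, prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64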
helpers_test.go:175: Cleaning up "json-output-error-308913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-308913
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (38.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-465095 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-465095 --network=: (36.918455949s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-465095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-465095
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-465095: (1.991583392s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.93s)

TestKicCustomNetwork/use_default_bridge_network (26.51s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-888720 --network=bridge
E1221 18:21:02.864883   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:02.870148   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:02.880401   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-888720 --network=bridge: (24.647496895s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
E1221 18:21:02.900723   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "docker-network-888720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-888720
E1221 18:21:02.941771   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:03.021910   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:03.182312   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:03.502884   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:04.143815   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-888720: (1.841400344s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.51s)

TestKicExistingNetwork (23.34s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-855135 --network=existing-network
E1221 18:21:05.423966   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:07.984267   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:09.536446   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:21:13.104685   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:21:23.344895   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-855135 --network=existing-network: (21.686092416s)
helpers_test.go:175: Cleaning up "existing-network-855135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-855135
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-855135: (1.524744723s)
--- PASS: TestKicExistingNetwork (23.34s)

TestKicCustomSubnet (23.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-197765 --subnet=192.168.60.0/24
E1221 18:21:43.825554   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-197765 --subnet=192.168.60.0/24: (21.94158871s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-197765 --format "{{(index .IPAM.Config 0).Subnet}}"
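The subnet assertion above can be reproduced by hand (a sketch, assuming the docker CLI on the host and this run's profile name):

	docker network inspect custom-subnet-197765 --format '{{(index .IPAM.Config 0).Subnet}}'
	# expected output: 192.168.60.0/24, the value passed via --subnet above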
helpers_test.go:175: Cleaning up "custom-subnet-197765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-197765
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-197765: (1.682792624s)
--- PASS: TestKicCustomSubnet (23.64s)

TestKicStaticIP (26.5s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-611756 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-611756 --static-ip=192.168.200.200: (24.42337909s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-611756 ip
helpers_test.go:175: Cleaning up "static-ip-611756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-611756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-611756: (1.947694323s)
--- PASS: TestKicStaticIP (26.50s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-575862 --driver=docker  --container-runtime=crio
E1221 18:22:19.987415   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:22:24.786518   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-575862 --driver=docker  --container-runtime=crio: (24.114719039s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-578223 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-578223 --driver=docker  --container-runtime=crio: (21.352016252s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-575862
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-578223
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-578223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-578223
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-578223: (1.83200468s)
helpers_test.go:175: Cleaning up "first-575862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-575862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-575862: (2.194494395s)
--- PASS: TestMinikubeProfile (50.46s)

TestMountStart/serial/StartWithMountFirst (5.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-040885 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-040885 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.912914974s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.91s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-040885 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-054226 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-054226 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.727396515s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-040885 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-040885 --alsologtostderr -v=5: (1.584159317s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054226 ssh -- ls /minikube-host
E1221 18:23:25.694405   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-054226
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-054226: (1.205117919s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-054226
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-054226: (6.575843776s)
--- PASS: TestMountStart/serial/RestartStopped (7.58s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-054226 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (80.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-186629 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1221 18:23:46.707059   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:23:53.377457   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-186629 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.688782521s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.11s)

TestMultiNode/serial/DeployApp2Nodes (5.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-186629 -- rollout status deployment/busybox: (3.751198501s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-pvfqq -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-qq9gx -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-pvfqq -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-qq9gx -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-pvfqq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-186629 -- exec busybox-5bc68d56bd-qq9gx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.36s)

TestMultiNode/serial/AddNode (18.92s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-186629 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-186629 -v 3 --alsologtostderr: (18.367676861s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.92s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-186629 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (8.67s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp testdata/cp-test.txt multinode-186629:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3722335659/001/cp-test_multinode-186629.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629:/home/docker/cp-test.txt multinode-186629-m02:/home/docker/cp-test_multinode-186629_multinode-186629-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m02 "sudo cat /home/docker/cp-test_multinode-186629_multinode-186629-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629:/home/docker/cp-test.txt multinode-186629-m03:/home/docker/cp-test_multinode-186629_multinode-186629-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m03 "sudo cat /home/docker/cp-test_multinode-186629_multinode-186629-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp testdata/cp-test.txt multinode-186629-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3722335659/001/cp-test_multinode-186629-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629-m02:/home/docker/cp-test.txt multinode-186629:/home/docker/cp-test_multinode-186629-m02_multinode-186629.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629 "sudo cat /home/docker/cp-test_multinode-186629-m02_multinode-186629.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629-m02:/home/docker/cp-test.txt multinode-186629-m03:/home/docker/cp-test_multinode-186629-m02_multinode-186629-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m03 "sudo cat /home/docker/cp-test_multinode-186629-m02_multinode-186629-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp testdata/cp-test.txt multinode-186629-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3722335659/001/cp-test_multinode-186629-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629-m03:/home/docker/cp-test.txt multinode-186629:/home/docker/cp-test_multinode-186629-m03_multinode-186629.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629 "sudo cat /home/docker/cp-test_multinode-186629-m03_multinode-186629.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 cp multinode-186629-m03:/home/docker/cp-test.txt multinode-186629-m02:/home/docker/cp-test_multinode-186629-m03_multinode-186629-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 ssh -n multinode-186629-m02 "sudo cat /home/docker/cp-test_multinode-186629-m03_multinode-186629-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.67s)

TestMultiNode/serial/StopNode (2.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-186629 node stop m03: (1.192979153s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-186629 status: exit status 7 (437.556155ms)

-- stdout --
	multinode-186629
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-186629-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-186629-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr: exit status 7 (430.852503ms)

-- stdout --
	multinode-186629
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-186629-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-186629-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1221 18:25:34.498241  116095 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:25:34.498346  116095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:25:34.498354  116095 out.go:309] Setting ErrFile to fd 2...
	I1221 18:25:34.498358  116095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:25:34.498536  116095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:25:34.498695  116095 out.go:303] Setting JSON to false
	I1221 18:25:34.498721  116095 mustload.go:65] Loading cluster: multinode-186629
	I1221 18:25:34.498804  116095 notify.go:220] Checking for updates...
	I1221 18:25:34.499067  116095 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:25:34.499079  116095 status.go:255] checking status of multinode-186629 ...
	I1221 18:25:34.499470  116095 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:25:34.514739  116095 status.go:330] multinode-186629 host status = "Running" (err=<nil>)
	I1221 18:25:34.514757  116095 host.go:66] Checking if "multinode-186629" exists ...
	I1221 18:25:34.514995  116095 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629
	I1221 18:25:34.529779  116095 host.go:66] Checking if "multinode-186629" exists ...
	I1221 18:25:34.529997  116095 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:25:34.530040  116095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629
	I1221 18:25:34.546090  116095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32849 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629/id_rsa Username:docker}
	I1221 18:25:34.625564  116095 ssh_runner.go:195] Run: systemctl --version
	I1221 18:25:34.628976  116095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:25:34.638383  116095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:25:34.688829  116095 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-21 18:25:34.680869952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:25:34.689383  116095 kubeconfig.go:92] found "multinode-186629" server: "https://192.168.58.2:8443"
	I1221 18:25:34.689406  116095 api_server.go:166] Checking apiserver status ...
	I1221 18:25:34.689435  116095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1221 18:25:34.699121  116095 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1409/cgroup
	I1221 18:25:34.706868  116095 api_server.go:182] apiserver freezer: "7:freezer:/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio/crio-719ce49243f4b9c64b09331910fea82947609dbc0a0b5e73742a8c2e553b99c9"
	I1221 18:25:34.706919  116095 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cf3b85f473e971f1ad181d8f6cf376d5925a08035e0bd6bdad4ab2f92e2fc3a4/crio/crio-719ce49243f4b9c64b09331910fea82947609dbc0a0b5e73742a8c2e553b99c9/freezer.state
	I1221 18:25:34.713905  116095 api_server.go:204] freezer state: "THAWED"
	I1221 18:25:34.713927  116095 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1221 18:25:34.717844  116095 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1221 18:25:34.717862  116095 status.go:421] multinode-186629 apiserver status = Running (err=<nil>)
	I1221 18:25:34.717870  116095 status.go:257] multinode-186629 status: &{Name:multinode-186629 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 18:25:34.717882  116095 status.go:255] checking status of multinode-186629-m02 ...
	I1221 18:25:34.718124  116095 cli_runner.go:164] Run: docker container inspect multinode-186629-m02 --format={{.State.Status}}
	I1221 18:25:34.733691  116095 status.go:330] multinode-186629-m02 host status = "Running" (err=<nil>)
	I1221 18:25:34.733706  116095 host.go:66] Checking if "multinode-186629-m02" exists ...
	I1221 18:25:34.733914  116095 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186629-m02
	I1221 18:25:34.748840  116095 host.go:66] Checking if "multinode-186629-m02" exists ...
	I1221 18:25:34.749069  116095 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1221 18:25:34.749099  116095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186629-m02
	I1221 18:25:34.764007  116095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32854 SSHKeyPath:/home/jenkins/minikube-integration/17848-9881/.minikube/machines/multinode-186629-m02/id_rsa Username:docker}
	I1221 18:25:34.849407  116095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1221 18:25:34.858759  116095 status.go:257] multinode-186629-m02 status: &{Name:multinode-186629-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1221 18:25:34.858784  116095 status.go:255] checking status of multinode-186629-m03 ...
	I1221 18:25:34.858996  116095 cli_runner.go:164] Run: docker container inspect multinode-186629-m03 --format={{.State.Status}}
	I1221 18:25:34.874250  116095 status.go:330] multinode-186629-m03 host status = "Stopped" (err=<nil>)
	I1221 18:25:34.874269  116095 status.go:343] host is not running, skipping remaining checks
	I1221 18:25:34.874276  116095 status.go:257] multinode-186629-m03 status: &{Name:multinode-186629-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
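The apiserver probe in the stderr above locates the kube-apiserver process, confirms its freezer cgroup is THAWED, and only then hits /healthz. A by-hand sketch of the same sequence (an assumption-laden illustration: cgroup v1 and curl inside the node image, with the PID, endpoint, and profile name taken from this run's log):

	minikube ssh -p multinode-186629 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	minikube ssh -p multinode-186629 -- sudo egrep '^[0-9]+:freezer:' /proc/1409/cgroup
	minikube ssh -p multinode-186629 -- curl -sk https://192.168.58.2:8443/healthz   # expect: ok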
--- PASS: TestMultiNode/serial/StopNode (2.06s)

TestMultiNode/serial/StartAfterStop (10.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-186629 node start m03 --alsologtostderr: (10.140219284s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.78s)

TestMultiNode/serial/RestartKeepsNodes (116.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-186629
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-186629
E1221 18:26:02.864911   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-186629: (24.839767281s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-186629 --wait=true -v=8 --alsologtostderr
E1221 18:26:30.547556   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:27:19.987798   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-186629 --wait=true -v=8 --alsologtostderr: (1m31.072980048s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-186629
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.03s)

TestMultiNode/serial/DeleteNode (4.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-186629 node delete m03: (4.050194388s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.61s)

TestMultiNode/serial/StopMultiNode (23.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-186629 stop: (23.580513002s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-186629 status: exit status 7 (88.460934ms)

-- stdout --
	multinode-186629
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-186629-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr: exit status 7 (91.373788ms)

-- stdout --
	multinode-186629
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-186629-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1221 18:28:10.014808  126225 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:28:10.015061  126225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:28:10.015071  126225 out.go:309] Setting ErrFile to fd 2...
	I1221 18:28:10.015075  126225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:28:10.015264  126225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:28:10.015426  126225 out.go:303] Setting JSON to false
	I1221 18:28:10.015455  126225 mustload.go:65] Loading cluster: multinode-186629
	I1221 18:28:10.015566  126225 notify.go:220] Checking for updates...
	I1221 18:28:10.015959  126225 config.go:182] Loaded profile config "multinode-186629": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:28:10.015977  126225 status.go:255] checking status of multinode-186629 ...
	I1221 18:28:10.016510  126225 cli_runner.go:164] Run: docker container inspect multinode-186629 --format={{.State.Status}}
	I1221 18:28:10.033935  126225 status.go:330] multinode-186629 host status = "Stopped" (err=<nil>)
	I1221 18:28:10.033956  126225 status.go:343] host is not running, skipping remaining checks
	I1221 18:28:10.033963  126225 status.go:257] multinode-186629 status: &{Name:multinode-186629 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1221 18:28:10.033984  126225 status.go:255] checking status of multinode-186629-m02 ...
	I1221 18:28:10.034197  126225 cli_runner.go:164] Run: docker container inspect multinode-186629-m02 --format={{.State.Status}}
	I1221 18:28:10.049316  126225 status.go:330] multinode-186629-m02 host status = "Stopped" (err=<nil>)
	I1221 18:28:10.049347  126225 status.go:343] host is not running, skipping remaining checks
	I1221 18:28:10.049354  126225 status.go:257] multinode-186629-m02 status: &{Name:multinode-186629-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.76s)

TestMultiNode/serial/RestartMultiNode (74.19s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-186629 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1221 18:28:25.694106   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:28:43.032012   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-186629 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.636425603s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-186629 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (74.19s)

TestMultiNode/serial/ValidateNameConflict (25.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-186629
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-186629-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-186629-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.278291ms)

-- stdout --
	* [multinode-186629-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-186629-m02' is duplicated with machine name 'multinode-186629-m02' in profile 'multinode-186629'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-186629-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-186629-m03 --driver=docker  --container-runtime=crio: (22.990261319s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-186629
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-186629: exit status 80 (261.891341ms)

-- stdout --
	* Adding node m03 to cluster multinode-186629

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-186629-m03 already exists in multinode-186629-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-186629-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-186629-m03: (1.861023286s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.24s)

TestPreload (152.69s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-174584 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1221 18:31:02.865395   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-174584 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m10.802898577s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-174584 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-174584 image pull gcr.io/k8s-minikube/busybox: (2.913455529s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-174584
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-174584: (5.652119315s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-174584 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1221 18:32:19.987068   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-174584 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m10.856563514s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-174584 image list
helpers_test.go:175: Cleaning up "test-preload-174584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-174584
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-174584: (2.248172552s)
--- PASS: TestPreload (152.69s)

TestScheduledStopUnix (99.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-591439 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-591439 --memory=2048 --driver=docker  --container-runtime=crio: (23.506931877s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-591439 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-591439 -n scheduled-stop-591439
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-591439 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-591439 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-591439 -n scheduled-stop-591439
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-591439
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-591439 --schedule 15s
E1221 18:33:25.694508   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-591439
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-591439: exit status 7 (72.367967ms)

-- stdout --
	scheduled-stop-591439
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-591439 -n scheduled-stop-591439
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-591439 -n scheduled-stop-591439: exit status 7 (72.285048ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-591439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-591439
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-591439: (4.176098079s)
--- PASS: TestScheduledStopUnix (99.04s)

TestInsufficientStorage (10.24s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-725343 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-725343 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.901340166s)

-- stdout --
	{"specversion":"1.0","id":"6d35f14d-4e47-4c77-939c-3bc458fd925c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-725343] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24c79de8-f81d-4a0f-9d45-9da12094cdd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17848"}}
	{"specversion":"1.0","id":"5d27f808-2127-4577-9693-510495a6478a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e0bf75c-0084-497c-9332-44a605f95bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig"}}
	{"specversion":"1.0","id":"89b27c25-f7ae-410b-9e2c-310ceeb93755","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube"}}
	{"specversion":"1.0","id":"185e8bf6-02f1-4b7b-9b39-febdc0c7e0d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"66c8bfbf-835b-4b6d-bf17-a4a7e2390297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9b25347e-3401-440a-af53-97b4ad26b619","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b045f3d0-33da-43f5-8346-cb631b04713c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a3896a48-080e-44db-89d8-32297192348e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f35d7d7-ae6a-4921-bffc-a04445f54cfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"97310f1b-2fc0-4a59-857d-eda0bad1169f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-725343 in cluster insufficient-storage-725343","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"92dcfd08-09bc-4182-865b-abded3b4e835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702920864-17822 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c05ad399-f712-4ad1-a0af-261d8491b381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3a07f12-25d0-4a31-8552-8dd39bb1c345","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-725343 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-725343 --output=json --layout=cluster: exit status 7 (261.650558ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-725343","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-725343","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1221 18:34:14.849914  147681 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-725343" does not appear in /home/jenkins/minikube-integration/17848-9881/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-725343 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-725343 --output=json --layout=cluster: exit status 7 (265.872169ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-725343","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-725343","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1221 18:34:15.116843  147772 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-725343" does not appear in /home/jenkins/minikube-integration/17848-9881/kubeconfig
	E1221 18:34:15.125950  147772 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/insufficient-storage-725343/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-725343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-725343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-725343: (1.81510923s)
--- PASS: TestInsufficientStorage (10.24s)
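
The storage exhaustion in this test is simulated, not real: the run advertises a fake disk capacity through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events above. A sketch of reproducing the exit-26 guard, assuming a hypothetical profile name "demo":

    $ export MINIKUBE_TEST_STORAGE_CAPACITY=100   # pretend /var holds 100 units...
    $ export MINIKUBE_TEST_AVAILABLE_STORAGE=19   # ...with only 19 free
    $ minikube start -p demo --memory=2048 --output=json --wait=true \
        --driver=docker --container-runtime=crio
    $ echo $?                                     # 26 (RSRC_DOCKER_STORAGE); pass --force to skip the check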

                                                
                                    
TestKubernetesUpgrade (368.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1221 18:36:02.864314   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.956414224s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-244507
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-244507: (5.167558687s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-244507 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-244507 status --format={{.Host}}: exit status 7 (90.08729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.061598016s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-244507 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (75.922339ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-244507] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-244507
	    minikube start -p kubernetes-upgrade-244507 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2445072 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-244507 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-244507 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.106924134s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-244507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-244507
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-244507: (2.336323384s)
--- PASS: TestKubernetesUpgrade (368.85s)
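
The upgrade path exercised here is: create an old cluster, stop it, restart at a newer version, then confirm a downgrade is refused. A condensed sketch of the same sequence, assuming a hypothetical profile name "demo":

    $ minikube start -p demo --memory=2200 --kubernetes-version=v1.16.0 \
        --driver=docker --container-runtime=crio
    $ minikube stop -p demo
    $ minikube start -p demo --memory=2200 --kubernetes-version=v1.29.0-rc.2 \
        --driver=docker --container-runtime=crio   # in-place upgrade
    $ minikube start -p demo --memory=2200 --kubernetes-version=v1.16.0 \
        --driver=docker --container-runtime=crio   # refused: exit 106, K8S_DOWNGRADE_UNSUPPORTED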

                                                
                                    
TestMissingContainerUpgrade (172.07s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2806668296.exe start -p missing-upgrade-300091 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2806668296.exe start -p missing-upgrade-300091 --memory=2200 --driver=docker  --container-runtime=crio: (1m46.845738282s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-300091
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-300091
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-300091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-300091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.165176977s)
helpers_test.go:175: Cleaning up "missing-upgrade-300091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-300091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-300091: (2.213508653s)
--- PASS: TestMissingContainerUpgrade (172.07s)
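
Here an old minikube binary (v1.9.0) creates the cluster, the Docker container backing the node is removed behind minikube's back, and the current binary is expected to recreate it on the next start. A sketch, assuming a hypothetical profile name "demo" and the old binary at ./minikube-old:

    $ ./minikube-old start -p demo --memory=2200 --driver=docker --container-runtime=crio
    $ docker stop demo && docker rm demo    # the node container is named after the profile
    $ minikube start -p demo --memory=2200 --driver=docker --container-runtime=crio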

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-825350 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-825350 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (105.02018ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-825350] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
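
The exit-14 usage error above is the guard against combining --no-kubernetes with an explicit Kubernetes version. A sketch of the rejected call plus the fix the message itself suggests, assuming a hypothetical profile name "demo":

    $ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 \
        --driver=docker --container-runtime=crio   # exit 14: MK_USAGE
    $ minikube config unset kubernetes-version     # clear any global default, per the suggestion
    $ minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio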

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-825350 --driver=docker  --container-runtime=crio
E1221 18:34:48.738435   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-825350 --driver=docker  --container-runtime=crio: (36.983732191s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-825350 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-825350 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-825350 --no-kubernetes --driver=docker  --container-runtime=crio: (6.960152302s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-825350 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-825350 status -o json: exit status 2 (324.721806ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-825350","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-825350
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-825350: (2.028804841s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.31s)

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-825350 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-825350 --no-kubernetes --driver=docker  --container-runtime=crio: (5.310249617s)
--- PASS: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-825350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-825350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.795192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
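
The verification is a plain systemd probe over SSH: a non-zero exit means the kubelet unit is not active, which is exactly what a --no-kubernetes profile should report. Assuming a hypothetical profile name "demo":

    $ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
    $ echo $?    # non-zero while kubelet is inactive (the run above saw exit status 1, ssh status 3)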

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-825350
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-825350: (1.221537218s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNetworkPlugins/group/false (3.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-050109 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-050109 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (165.071753ms)

                                                
                                                
-- stdout --
	* [false-050109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17848
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1221 18:35:10.932630  166253 out.go:296] Setting OutFile to fd 1 ...
	I1221 18:35:10.932810  166253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:35:10.932819  166253 out.go:309] Setting ErrFile to fd 2...
	I1221 18:35:10.932824  166253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1221 18:35:10.932999  166253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17848-9881/.minikube/bin
	I1221 18:35:10.933564  166253 out.go:303] Setting JSON to false
	I1221 18:35:10.935093  166253 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4658,"bootTime":1703179053,"procs":901,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1221 18:35:10.935158  166253 start.go:138] virtualization: kvm guest
	I1221 18:35:10.937790  166253 out.go:177] * [false-050109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1221 18:35:10.939446  166253 out.go:177]   - MINIKUBE_LOCATION=17848
	I1221 18:35:10.939460  166253 notify.go:220] Checking for updates...
	I1221 18:35:10.940991  166253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1221 18:35:10.942410  166253 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17848-9881/kubeconfig
	I1221 18:35:10.943693  166253 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17848-9881/.minikube
	I1221 18:35:10.944874  166253 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1221 18:35:10.946062  166253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1221 18:35:10.947714  166253 config.go:182] Loaded profile config "NoKubernetes-825350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1221 18:35:10.947803  166253 config.go:182] Loaded profile config "cert-expiration-871049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:35:10.947893  166253 config.go:182] Loaded profile config "cert-options-261760": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1221 18:35:10.947967  166253 driver.go:392] Setting default libvirt URI to qemu:///system
	I1221 18:35:10.969831  166253 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1221 18:35:10.969981  166253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1221 18:35:11.027564  166253 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-21 18:35:11.018868929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648054272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1221 18:35:11.027656  166253 docker.go:295] overlay module found
	I1221 18:35:11.029331  166253 out.go:177] * Using the docker driver based on user configuration
	I1221 18:35:11.030736  166253 start.go:298] selected driver: docker
	I1221 18:35:11.030749  166253 start.go:902] validating driver "docker" against <nil>
	I1221 18:35:11.030759  166253 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1221 18:35:11.033062  166253 out.go:177] 
	W1221 18:35:11.034636  166253 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1221 18:35:11.035887  166253 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-050109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-050109" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 21 Dec 2023 18:34:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-871049
contexts:
- context:
    cluster: cert-expiration-871049
    extensions:
    - extension:
        last-update: Thu, 21 Dec 2023 18:34:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-871049
  name: cert-expiration-871049
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-871049
  user:
    client-certificate: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/cert-expiration-871049/client.crt
    client-key: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/cert-expiration-871049/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-050109

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-050109"

                                                
                                                
----------------------- debugLogs end: false-050109 [took: 3.438916361s] --------------------------------
helpers_test.go:175: Cleaning up "false-050109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-050109
--- PASS: TestNetworkPlugins/group/false (3.78s)
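
The failure asserted by this group is intentional: with the crio runtime, minikube rejects --cni=false outright (exit 14). A sketch of the rejected call next to a CNI-backed start this job does run, assuming a hypothetical profile name "demo":

    $ minikube start -p demo --memory=2048 --cni=false \
        --driver=docker --container-runtime=crio    # exit 14: the "crio" container runtime requires CNI
    $ minikube start -p demo --memory=3072 --cni=kindnet \
        --driver=docker --container-runtime=crio    # accepted, as in the kindnet group below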

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-825350 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-825350 --driver=docker  --container-runtime=crio: (7.065778335s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-825350 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-825350 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.766305ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-276178
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.50s)

                                                
                                    
TestPause/serial/Start (51.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-585779 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-585779 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.010038095s)
--- PASS: TestPause/serial/Start (51.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1221 18:38:25.694455   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.234192213s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.951359166s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.95s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (36.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-585779 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-585779 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.395642399s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.42s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-spkfc" [cee6d59a-2fbd-4c0c-af33-04b30c127e29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-spkfc" [cee6d59a-2fbd-4c0c-af33-04b30c127e29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004024472s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)
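
The NetCatPod step deploys the probe workload that the DNS, Localhost and HairPin checks below exec into. The harness waits for readiness via its own Go helpers; kubectl wait is an equivalent stand-in for that step:

    $ kubectl --context auto-050109 replace --force -f testdata/netcat-deployment.yaml
    $ kubectl --context auto-050109 wait --for=condition=ready pod \
        --selector=app=netcat --timeout=15m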

                                                
                                    
TestPause/serial/Pause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-585779 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-585779 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-585779 --output=json --layout=cluster: exit status 2 (305.874703ms)

-- stdout --
	{"Name":"pause-585779","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-585779","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
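
In the status JSON above, StatusCode 418/"Paused" marks paused components and 405/"Stopped" the stopped kubelet, so exit status 2 is the expected result for a paused profile. A hedged one-liner to pull the per-component states (jq is an assumption here; it is not part of the test tooling):

	# illustrative only: summarize component states from the cluster-status JSON
	out/minikube-linux-amd64 status -p pause-585779 --output=json --layout=cluster \
	  | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'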

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-585779 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
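
The HairPin check has the netcat pod dial its own Service name ("netcat" on port 8080), verifying that hairpin NAT routes service traffic back to the originating pod. A hedged reduction of that probe (-z means connect-only, -w 5 caps the wait at five seconds):

	# illustrative only: connect-scan the pod's own Service from inside the pod
	kubectl --context auto-050109 exec deployment/netcat -- \
	  /bin/sh -c 'nc -w 5 -z netcat 8080'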

                                                
                                    
TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-585779 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (2.55s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-585779 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-585779 --alsologtostderr -v=5: (2.55241205s)
--- PASS: TestPause/serial/DeletePaused (2.55s)

TestPause/serial/VerifyDeletedResources (15.22s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.164827307s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-585779
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-585779: exit status 1 (18.480311ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-585779: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.22s)
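
Here the non-zero exit from docker volume inspect is the success signal: the profile's volume no longer exists after delete. A hedged manual spot-check of the same post-delete state (the echo messages are illustrative):

	# illustrative only: confirm the profile's docker resources are gone
	docker volume inspect pause-585779 >/dev/null 2>&1 || echo "volume pause-585779 removed"
	docker network ls --format '{{.Name}}' | grep -qx pause-585779 || echo "network pause-585779 removed"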

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.09s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.088459657s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.09s)

TestNetworkPlugins/group/custom-flannel/Start (60.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.390014875s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xwxbf" [a75a62d7-de81-427a-b612-a45ce9b18048] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004832711s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkl5n" [41cd7036-ac75-48cb-908f-a333e0d6165f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkl5n" [41cd7036-ac75-48cb-908f-a333e0d6165f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003445889s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (62.05s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.046219687s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.05s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hm2dg" [9937e175-a4e3-41dc-9c46-24ec7a0a89e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1221 18:41:02.864840   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-hm2dg" [9937e175-a4e3-41dc-9c46-24ec7a0a89e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004112707s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-f9gqd" [66f29667-67b2-4c26-a514-969e19d6349f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005266836s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-httcb" [2f0608d9-14fc-4496-aeca-3139728c79d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-httcb" [2f0608d9-14fc-4496-aeca-3139728c79d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004192549s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.19s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (39.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.225316726s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.23s)

TestNetworkPlugins/group/enable-default-cni/Start (41.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-050109 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (41.100724751s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-h2lct" [de8c4867-f33e-4c97-b7e4-cbfc8402942f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004449645s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (123.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-614001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-614001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m3.457026305s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.46s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n8ddj" [adfce911-1b8e-4abb-a02d-65d32cf748f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n8ddj" [adfce911-1b8e-4abb-a02d-65d32cf748f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004104549s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.21s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r2bqg" [f258c7cb-a6c5-444b-8b59-e53eec4cdec2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r2bqg" [f258c7cb-a6c5-444b-8b59-e53eec4cdec2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00449395s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-050109 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-050109 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-29gmg" [32104dd0-b0a9-4ef4-be18-99130c714586] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-29gmg" [32104dd0-b0a9-4ef4-be18-99130c714586] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003658374s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-050109 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-050109 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
E1221 18:46:45.216165   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:46:47.777039   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:46:48.098422   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:52.898225   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:47:03.138617   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:47:08.895607   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:08.900861   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:08.911128   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:08.931368   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:08.971741   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:09.051994   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:09.212860   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:09.533459   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:10.174010   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:11.454801   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:12.268654   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:47:13.211432   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.216679   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.227435   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.247667   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.287925   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.368237   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.528655   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:13.849182   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:14.015791   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:14.489353   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:15.769738   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:18.330307   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:19.136737   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:19.987097   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:47:20.877287   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:47:23.450988   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:23.619270   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:47:29.058880   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:47:29.377351   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:33.691837   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:47:47.745078   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:47:49.858429   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:47:54.172975   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:48:04.579866   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:48:25.693902   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
E1221 18:48:30.818704   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/bridge-050109/client.crt: no such file or directory
E1221 18:48:35.133368   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
E1221 18:48:42.798381   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:48:50.117177   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.122436   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.132703   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.152894   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.193164   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.273451   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.433869   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.754393   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:50.979685   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:48:51.395021   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:52.675721   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:48:55.236695   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:49:00.357398   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
E1221 18:49:10.597781   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (57.18s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-682214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-682214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (57.179509816s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.18s)

TestStartStop/group/embed-certs/serial/FirstStart (42.58s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-096617 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-096617 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (42.581566876s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.58s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-987347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-987347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m7.212737248s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.21s)

TestStartStop/group/no-preload/serial/DeployApp (11.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-682214 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [83457a21-758c-4cf5-86c5-36d3d782fd9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [83457a21-758c-4cf5-86c5-36d3d782fd9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004207444s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-682214 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.24s)

TestStartStop/group/embed-certs/serial/DeployApp (10.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-096617 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [26d0579c-60d2-4041-927d-c8e87ec6aef0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1221 18:43:25.694466   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/functional-209430/client.crt: no such file or directory
helpers_test.go:344: "busybox" [26d0579c-60d2-4041-927d-c8e87ec6aef0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003713642s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-096617 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-096617 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-096617 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-682214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-682214 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/embed-certs/serial/Stop (12.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-096617 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-096617 --alsologtostderr -v=3: (12.078496334s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-682214 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-682214 --alsologtostderr -v=3: (12.006082575s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-682214 -n no-preload-682214
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-682214 -n no-preload-682214: exit status 7 (83.258052ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-682214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
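
minikube status exits non-zero for a stopped profile (exit status 7 in these runs, which the test tolerates, per "may be ok" above). A hedged guard for scripting around that behavior (treating 7 as the stopped-host code is an assumption drawn from this run's output):

	# illustrative only: tolerate the stopped-host status code seen in this run
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-682214 -n no-preload-682214
	[ $? -eq 7 ] && echo "host stopped (expected after 'minikube stop')"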

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-096617 -n embed-certs-096617
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-096617 -n embed-certs-096617: exit status 7 (82.927938ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-096617 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (607.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-682214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-682214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m6.780059207s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-682214 -n no-preload-682214
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (607.07s)

TestStartStop/group/embed-certs/serial/SecondStart (337.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-096617 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-096617 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.025126725s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-096617 -n embed-certs-096617
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.33s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-614001 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [01bf7ed5-002f-49d4-820d-07bfbe4a740a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [01bf7ed5-002f-49d4-820d-07bfbe4a740a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003419699s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-614001 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-987347 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [66ac9157-08fc-416f-8499-e97a63f3804f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [66ac9157-08fc-416f-8499-e97a63f3804f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003916966s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-987347 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-614001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-614001 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)
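The describe call above is how the test inspects the metrics-server deployment after the image and registry overrides; a narrower spot check could read just the image reference, which should carry the fake.domain registry prefix if the override applied (a sketch under that assumption):

  $ kubectl --context old-k8s-version-614001 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'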

TestStartStop/group/old-k8s-version/serial/Stop (11.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-614001 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-614001 --alsologtostderr -v=3: (11.900252714s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.90s)
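Since these profiles run on the docker driver, a stop can be cross-checked outside minikube by inspecting the profile's container state (a hypothetical spot check; the docker driver names the container after the profile):

  $ docker ps -a --filter name=old-k8s-version-614001 --format '{{.Names}}: {{.Status}}'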

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-987347 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-987347 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-987347 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-987347 --alsologtostderr -v=3: (12.075000297s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614001 -n old-k8s-version-614001
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614001 -n old-k8s-version-614001: exit status 7 (75.001155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-614001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
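Exit status 7 here is not a failure: as the stdout shows, it encodes the Stopped host state, which is why the helper logs "may be ok" and the test proceeds to enable the dashboard addon against the stopped profile. Reproducing the status probe by hand (a sketch; the sample output matches the log above):

  $ out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614001; echo "exit: $?"
  Stopped
  exit: 7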

TestStartStop/group/old-k8s-version/serial/SecondStart (65.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-614001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-614001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m5.3939115s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-614001 -n old-k8s-version-614001
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (65.70s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347: exit status 7 (81.193302ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-987347 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-987347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1221 18:44:28.424735   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:28.429971   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:28.440200   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:28.460444   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:28.500743   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:28.581004   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:28.741319   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:29.062236   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:29.702767   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:30.983229   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:33.543951   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:38.665115   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:44:48.906058   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:45:03.903884   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:03.909174   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:03.919421   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:03.939639   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:03.979877   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:04.060201   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:04.220624   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:04.541253   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:05.181930   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:06.462135   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:09.022890   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:09.386458   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:45:14.143381   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-987347 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m37.745484443s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.17s)
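The E1221 cert_rotation lines interleaved above appear to be noise from the shared test process rather than part of this test: client-go's certificate reloader is still watching client.crt paths for profiles that earlier tests already tore down (auto-050109, kindnet-050109), so the opens fail; the same lines recur throughout the rest of the report. The flag that matters here is --apiserver-port=8444, which is what "diff-port" refers to; after the restart the non-default port can be confirmed in the control-plane URL (a sketch):

  $ kubectl --context default-k8s-diff-port-987347 cluster-info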

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wlvlj" [04defbfa-2c6d-4a1b-89ef-88e480e69046] Running
E1221 18:45:23.032879   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/addons-443778/client.crt: no such file or directory
E1221 18:45:24.383501   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003221001s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
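This step waits for the dashboard pod created by the earlier addons enable dashboard call to come back up after the stop/start cycle; the same selector can be queried directly (a sketch):

  $ kubectl --context old-k8s-version-614001 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard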

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wlvlj" [04defbfa-2c6d-4a1b-89ef-88e480e69046] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003698928s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-614001 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
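dashboard-metrics-scraper is the second deployment installed by the dashboard addon; the earlier enable used --images=MetricsScraper=registry.k8s.io/echoserver:1.4, so its image can be verified the same way as metrics-server above (a sketch under that assumption):

  $ kubectl --context old-k8s-version-614001 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'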

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-614001 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
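The "Found non-minikube image" lines are informational: the check tolerates extra images (kindnet, the busybox test image) as long as the expected Kubernetes images for v1.16.0 are present. With the crio runtime the same inventory can be read straight off the node (a sketch):

  $ out/minikube-linux-amd64 -p old-k8s-version-614001 ssh "sudo crictl images"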

TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-614001 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-614001 -n old-k8s-version-614001
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-614001 -n old-k8s-version-614001: exit status 2 (288.843427ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-614001 -n old-k8s-version-614001
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-614001 -n old-k8s-version-614001: exit status 2 (290.165503ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-614001 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-614001 -n old-k8s-version-614001
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-614001 -n old-k8s-version-614001
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)
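The pause sequence above is: pause, confirm the paused state, unpause, confirm recovery. While paused, status exits 2 and reports APIServer=Paused with Kubelet=Stopped, which the test accepts as the expected paused signature. Condensed (a sketch; assumes status accepts a combined Go template):

  $ out/minikube-linux-amd64 pause -p old-k8s-version-614001
  $ out/minikube-linux-amd64 status -p old-k8s-version-614001 --format '{{.APIServer}}/{{.Kubelet}}'
  $ out/minikube-linux-amd64 unpause -p old-k8s-version-614001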

TestStartStop/group/newest-cni/serial/FirstStart (36.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-794063 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1221 18:45:44.863780   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:45:50.347498   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:45:58.953525   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:58.958788   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:58.969039   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:58.989302   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:59.029571   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:59.109816   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:59.270183   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:45:59.591215   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:46:00.231828   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:46:01.512971   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:46:02.864188   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
E1221 18:46:04.074024   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:46:07.138388   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.143635   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.153900   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.174166   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.214425   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.294797   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.455961   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:07.776540   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:08.416874   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:09.194936   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:46:09.697019   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-794063 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (36.112599062s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-794063 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1221 18:46:12.257392   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)
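The warning is expected for this profile: it was started with --network-plugin=cni and a kubeadm pod-network-cidr but no CNI manifest is applied, so nodes typically stay NotReady and regular pods cannot schedule. That is also why the DeployApp and *ExistsAfterStop steps for newest-cni are no-ops below. Observable with (a sketch):

  $ kubectl --context newest-cni-794063 get nodes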

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-794063 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-794063 --alsologtostderr -v=3: (1.219756004s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794063 -n newest-cni-794063
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794063 -n newest-cni-794063: exit status 7 (77.199612ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-794063 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (25.55s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-794063 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1221 18:46:17.377692   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
E1221 18:46:19.435668   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
E1221 18:46:25.824320   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
E1221 18:46:27.617933   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/calico-050109/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-794063 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (25.250561189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794063 -n newest-cni-794063
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.55s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-794063 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-794063 --alsologtostderr -v=1
E1221 18:46:39.916487   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/custom-flannel-050109/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794063 -n newest-cni-794063
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794063 -n newest-cni-794063: exit status 2 (291.080374ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-794063 -n newest-cni-794063
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-794063 -n newest-cni-794063: exit status 2 (293.352881ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-794063 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794063 -n newest-cni-794063
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-794063 -n newest-cni-794063
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tpbsf" [c74a5b34-af4c-4546-a7c4-20029873a4f2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1221 18:49:26.500296   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/flannel-050109/client.crt: no such file or directory
E1221 18:49:28.424422   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tpbsf" [c74a5b34-af4c-4546-a7c4-20029873a4f2] Running
E1221 18:49:31.077992   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003604556s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-tpbsf" [c74a5b34-af4c-4546-a7c4-20029873a4f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003282247s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-096617 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-096617 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-096617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-096617 -n embed-certs-096617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-096617 -n embed-certs-096617: exit status 2 (283.420748ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-096617 -n embed-certs-096617
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-096617 -n embed-certs-096617: exit status 2 (291.975155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-096617 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-096617 -n embed-certs-096617
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-096617 -n embed-certs-096617
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-88r6b" [0e09f422-97ad-4775-b73a-efd832e0c788] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1221 18:49:56.109430   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/auto-050109/client.crt: no such file or directory
E1221 18:49:57.054153   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/enable-default-cni-050109/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-88r6b" [0e09f422-97ad-4775-b73a-efd832e0c788] Running
E1221 18:50:03.904524   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/kindnet-050109/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004054773s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-88r6b" [0e09f422-97ad-4775-b73a-efd832e0c788] Running
E1221 18:50:12.039150   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/old-k8s-version-614001/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006117s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-987347 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-987347 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-987347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347: exit status 2 (282.405495ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347: exit status 2 (292.783204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-987347 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-987347 -n default-k8s-diff-port-987347
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qchzv" [9db0cfed-03b9-44c3-9b92-f25944e683a0] Running
E1221 18:53:54.837671   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:54.842938   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:54.853170   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:54.873458   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:54.913748   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:54.994069   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:55.154453   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:55.474673   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:56.115688   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
E1221 18:53:57.395864   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004033788s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qchzv" [9db0cfed-03b9-44c3-9b92-f25944e683a0] Running
E1221 18:53:59.956414   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004332408s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-682214 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-682214 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.51s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-682214 --alsologtostderr -v=1
E1221 18:54:05.077272   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/default-k8s-diff-port-987347/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-682214 -n no-preload-682214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-682214 -n no-preload-682214: exit status 2 (278.939773ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-682214 -n no-preload-682214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-682214 -n no-preload-682214: exit status 2 (282.491501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-682214 --alsologtostderr -v=1
E1221 18:54:05.908332   16664 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/ingress-addon-legacy-341255/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-682214 -n no-preload-682214
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-682214 -n no-preload-682214
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.51s)

Test skip (27/316)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
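These download-only subtests short-circuit when a preloaded tarball is available, since the preload already bundles the images and binaries they would otherwise fetch and cache. Presence of the preload can be checked locally (a sketch; assumes the conventional cache path under the minikube home):

  $ ls ~/.minikube/cache/preloaded-tarball/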

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.6s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-050109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-050109

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-050109

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/hosts:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/resolv.conf:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-050109

>>> host: crictl pods:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: crictl containers:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> k8s: describe netcat deployment:
error: context "kubenet-050109" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-050109" does not exist

>>> k8s: netcat logs:
error: context "kubenet-050109" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-050109" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-050109" does not exist

>>> k8s: coredns logs:
error: context "kubenet-050109" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-050109" does not exist

>>> k8s: api server logs:
error: context "kubenet-050109" does not exist

>>> host: /etc/cni:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: ip a s:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: ip r s:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: iptables-save:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: iptables table nat:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-050109" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-050109" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-050109" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: kubelet daemon config:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> k8s: kubelet logs:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 21 Dec 2023 18:34:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-871049
contexts:
- context:
    cluster: cert-expiration-871049
    extensions:
    - extension:
        last-update: Thu, 21 Dec 2023 18:34:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-871049
  name: cert-expiration-871049
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-871049
  user:
    client-certificate: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/cert-expiration-871049/client.crt
    client-key: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/cert-expiration-871049/client.key
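
Note: the kubeconfig dump above accounts for the repeated context errors in this debug log: current-context is empty and the only entry left over is cert-expiration-871049, so every probe pinned to the kubenet-050109 context fails. If reproducing locally, the standard kubectl commands below would confirm which contexts a kubeconfig actually holds (illustrative; not part of this run):

kubectl config get-contexts
kubectl config current-context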

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-050109

>>> host: docker daemon status:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: docker daemon config:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: docker system info:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: cri-docker daemon status:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: cri-docker daemon config:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: cri-dockerd version:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: containerd daemon status:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: containerd daemon config:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: containerd config dump:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: crio daemon status:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: crio daemon config:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: /etc/crio:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

>>> host: crio config:
* Profile "kubenet-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-050109"

----------------------- debugLogs end: kubenet-050109 [took: 3.42533191s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-050109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-050109
--- SKIP: TestNetworkPlugins/group/kubenet (3.60s)
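
Note: kubenet is kubelet's legacy network plugin rather than a CNI plugin, which is why the crio job skips it above. A hypothetical invocation that would satisfy crio's CNI requirement (real minikube start flags, but not a command taken from this run):

minikube start -p kubenet-050109 --container-runtime=cri-o --cni=bridge
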
TestNetworkPlugins/group/cilium (3.83s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-050109 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-050109

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-050109

>>> host: /etc/nsswitch.conf:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/hosts:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/resolv.conf:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-050109

>>> host: crictl pods:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: crictl containers:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> k8s: describe netcat deployment:
error: context "cilium-050109" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-050109" does not exist

>>> k8s: netcat logs:
error: context "cilium-050109" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-050109" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-050109" does not exist

>>> k8s: coredns logs:
error: context "cilium-050109" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-050109" does not exist

>>> k8s: api server logs:
error: context "cilium-050109" does not exist

>>> host: /etc/cni:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: ip a s:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: ip r s:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: iptables-save:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: iptables table nat:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-050109

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-050109

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-050109" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-050109" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-050109

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-050109

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-050109" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-050109" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-050109" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-050109" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-050109" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: kubelet daemon config:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> k8s: kubelet logs:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17848-9881/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 21 Dec 2023 18:34:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-871049
contexts:
- context:
    cluster: cert-expiration-871049
    extensions:
    - extension:
        last-update: Thu, 21 Dec 2023 18:34:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-871049
  name: cert-expiration-871049
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-871049
  user:
    client-certificate: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/cert-expiration-871049/client.crt
    client-key: /home/jenkins/minikube-integration/17848-9881/.minikube/profiles/cert-expiration-871049/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-050109

>>> host: docker daemon status:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: docker daemon config:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: docker system info:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: cri-docker daemon status:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: cri-docker daemon config:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: cri-dockerd version:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: containerd daemon status:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: containerd daemon config:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: containerd config dump:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: crio daemon status:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: crio daemon config:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: /etc/crio:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

>>> host: crio config:
* Profile "cilium-050109" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-050109"

----------------------- debugLogs end: cilium-050109 [took: 3.644103744s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-050109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-050109
--- SKIP: TestNetworkPlugins/group/cilium (3.83s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-854809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-854809
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
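
Note: this group only runs against the virtualbox driver, so this docker-driver job creates the placeholder profile and immediately deletes it. For reference, a hypothetical invocation that would exercise it (real minikube start flags, not a command from this job):

minikube start -p disable-driver-mounts-854809 --driver=virtualbox --disable-driver-mounts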