Test Report: Docker_Linux_crio 17761

4145ffc8c3ff629bd64b588eb0db70699e9f5232:2023-12-12:32257

Failed tests (6/315)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                          | 155.37       |
| 41    | TestAddons/parallel/Headlamp                         | 2.28         |
| 166   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 178.68       |
| 216   | TestMultiNode/serial/PingHostFrom2Pods               | 3.3          |
| 238   | TestRunningBinaryUpgrade                             | 75.03        |
| 246   | TestStoppedBinaryUpgrade/Upgrade                     | 94.31        |
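To retry one of these locally, the usual pattern is Go's -run filter against the integration package (a hedged sketch: the --minikube-start-args flag follows minikube's integration-test conventions and may differ by version):

	# re-run only the failing Ingress test on the same driver/runtime combination
	go test ./test/integration -v -timeout 60m \
		-run 'TestAddons/parallel/Ingress' \
		-args --minikube-start-args='--driver=docker --container-runtime=crio'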
TestAddons/parallel/Ingress (155.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-818905 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-818905 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-818905 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7d1e0103-93fa-4266-b3cd-98285a37ea51] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7d1e0103-93fa-4266-b3cd-98285a37ea51] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.098719316s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-818905 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.093506978s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-818905 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-818905 addons disable ingress --alsologtostderr -v=1: (7.634360214s)
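Exit status 28 above is curl's CURLE_OPERATION_TIMEDOUT: the SSH session itself worked, but curl inside the node never got a response from the ingress on 127.0.0.1:80. A manual probe with an explicit deadline and verbose output (reusing this run's profile name) helps separate "no listener" from "slow backend":

	# cap the request at 10s and show the connection phase
	out/minikube-linux-amd64 -p addons-818905 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"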
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-818905
helpers_test.go:235: (dbg) docker inspect addons-818905:

-- stdout --
	[
	    {
	        "Id": "c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a",
	        "Created": "2023-12-12T22:03:15.59381978Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:03:15.900885909Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/hosts",
	        "LogPath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a-json.log",
	        "Name": "/addons-818905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-818905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-818905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e-init/diff:/var/lib/docker/overlay2/315943c5fbce6bf5205163f366377908e1fa1e507321eff7fb62256fbf325087/diff",
	                "MergedDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-818905",
	                "Source": "/var/lib/docker/volumes/addons-818905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-818905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-818905",
	                "name.minikube.sigs.k8s.io": "addons-818905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44f8d23852dbe637e6958c0038ab91ba8bc5974c5bb86adccab2e9f517474532",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44f8d23852db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-818905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c3e9db620dfd",
	                        "addons-818905"
	                    ],
	                    "NetworkID": "62cff3dd1908cdc8b0cac8152bda41ab72e4c63f6f1286b803d21d1d3261d680",
	                    "EndpointID": "9c73d266e6ed0c9b8ae901967fd9ba75f632cc202bd6b09770a62eb75973d61a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
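When triaging, the full dump above is rarely needed; docker inspect's standard -f/--format template can extract just the field under suspicion, e.g. the container's network endpoint:

	# print only the network attachments as JSON
	docker inspect -f '{{json .NetworkSettings.Networks}}' addons-818905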
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-818905 -n addons-818905
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-818905 logs -n 25: (1.144264421s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-479271                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| delete  | -p download-only-479271                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| start   | --download-only -p                                                                          | download-docker-042208 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | download-docker-042208                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-042208                                                                   | download-docker-042208 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-328563   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | binary-mirror-328563                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36103                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-328563                                                                     | binary-mirror-328563   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| addons  | disable dashboard -p                                                                        | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | addons-818905                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | addons-818905                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-818905 --wait=true                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | -p addons-818905                                                                            |                        |         |         |                     |                     |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-818905 ssh cat                                                                       | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | /opt/local-path-provisioner/pvc-63257cc8-df89-4d4d-9324-970810f80368_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-818905 addons                                                                        | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-818905 ip                                                                            | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | -p addons-818905                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-818905 ssh curl -s                                                                   | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | addons-818905                                                                               |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | addons-818905                                                                               |                        |         |         |                     |                     |
	| addons  | addons-818905 addons                                                                        | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:06 UTC | 12 Dec 23 22:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-818905 addons                                                                        | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:06 UTC | 12 Dec 23 22:06 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-818905 ip                                                                            | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:07 UTC | 12 Dec 23 22:07 UTC |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:07 UTC | 12 Dec 23 22:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:07 UTC | 12 Dec 23 22:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:54.281932   17479 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:54.282094   17479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:54.282104   17479 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:54.282108   17479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:54.282329   17479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:02:54.282956   17479 out.go:303] Setting JSON to false
	I1212 22:02:54.283781   17479 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2726,"bootTime":1702415848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:54.283839   17479 start.go:138] virtualization: kvm guest
	I1212 22:02:54.286219   17479 out.go:177] * [addons-818905] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:54.287719   17479 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:02:54.289128   17479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:54.287718   17479 notify.go:220] Checking for updates...
	I1212 22:02:54.292015   17479 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:02:54.293538   17479 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:02:54.295033   17479 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:02:54.296426   17479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:02:54.297885   17479 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:02:54.317317   17479 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:02:54.317424   17479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:54.365132   17479 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-12 22:02:54.356923234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:54.365243   17479 docker.go:295] overlay module found
	I1212 22:02:54.368022   17479 out.go:177] * Using the docker driver based on user configuration
	I1212 22:02:54.369482   17479 start.go:298] selected driver: docker
	I1212 22:02:54.369500   17479 start.go:902] validating driver "docker" against <nil>
	I1212 22:02:54.369513   17479 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:02:54.370756   17479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:54.421619   17479 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-12 22:02:54.413375363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:54.421756   17479 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:02:54.421967   17479 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:02:54.423913   17479 out.go:177] * Using Docker driver with root privileges
	I1212 22:02:54.425274   17479 cni.go:84] Creating CNI manager for ""
	I1212 22:02:54.425289   17479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:02:54.425297   17479 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 22:02:54.425306   17479 start_flags.go:323] config:
	{Name:addons-818905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:54.426900   17479 out.go:177] * Starting control plane node addons-818905 in cluster addons-818905
	I1212 22:02:54.428221   17479 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:02:54.429551   17479 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:02:54.430776   17479 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:54.430804   17479 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:54.430810   17479 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:54.430861   17479 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:02:54.430892   17479 preload.go:174] Found /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:02:54.430903   17479 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:02:54.431308   17479 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/config.json ...
	I1212 22:02:54.431333   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/config.json: {Name:mkdb6ed98b8cb72753cbb97152ca6748f00feee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:02:54.445142   17479 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 22:02:54.445260   17479 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 22:02:54.445278   17479 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory, skipping pull
	I1212 22:02:54.445282   17479 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in cache, skipping pull
	I1212 22:02:54.445290   17479 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 as a tarball
	I1212 22:02:54.445297   17479 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 from local cache
	I1212 22:03:07.208869   17479 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 from cached tarball
	I1212 22:03:07.208919   17479 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:03:07.208957   17479 start.go:365] acquiring machines lock for addons-818905: {Name:mk5192f60678fae1daf6eb7075e7a56b8fe6e5da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:03:07.209057   17479 start.go:369] acquired machines lock for "addons-818905" in 81.749µs
	I1212 22:03:07.209082   17479 start.go:93] Provisioning new machine with config: &{Name:addons-818905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:03:07.209151   17479 start.go:125] createHost starting for "" (driver="docker")
	I1212 22:03:07.290277   17479 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1212 22:03:07.290555   17479 start.go:159] libmachine.API.Create for "addons-818905" (driver="docker")
	I1212 22:03:07.290592   17479 client.go:168] LocalClient.Create starting
	I1212 22:03:07.290748   17479 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem
	I1212 22:03:07.398915   17479 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem
	I1212 22:03:07.659756   17479 cli_runner.go:164] Run: docker network inspect addons-818905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 22:03:07.675004   17479 cli_runner.go:211] docker network inspect addons-818905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 22:03:07.675082   17479 network_create.go:281] running [docker network inspect addons-818905] to gather additional debugging logs...
	I1212 22:03:07.675109   17479 cli_runner.go:164] Run: docker network inspect addons-818905
	W1212 22:03:07.691431   17479 cli_runner.go:211] docker network inspect addons-818905 returned with exit code 1
	I1212 22:03:07.691466   17479 network_create.go:284] error running [docker network inspect addons-818905]: docker network inspect addons-818905: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-818905 not found
	I1212 22:03:07.691481   17479 network_create.go:286] output of [docker network inspect addons-818905]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-818905 not found
	
	** /stderr **
	I1212 22:03:07.691620   17479 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:03:07.706942   17479 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002866a80}
	I1212 22:03:07.706986   17479 network_create.go:124] attempt to create docker network addons-818905 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 22:03:07.707034   17479 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-818905 addons-818905
	I1212 22:03:07.981158   17479 network_create.go:108] docker network addons-818905 192.168.49.0/24 created
	I1212 22:03:07.981190   17479 kic.go:121] calculated static IP "192.168.49.2" for the "addons-818905" container
	I1212 22:03:07.981247   17479 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 22:03:07.995904   17479 cli_runner.go:164] Run: docker volume create addons-818905 --label name.minikube.sigs.k8s.io=addons-818905 --label created_by.minikube.sigs.k8s.io=true
	I1212 22:03:08.041288   17479 oci.go:103] Successfully created a docker volume addons-818905
	I1212 22:03:08.041380   17479 cli_runner.go:164] Run: docker run --rm --name addons-818905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-818905 --entrypoint /usr/bin/test -v addons-818905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 22:03:10.435995   17479 cli_runner.go:217] Completed: docker run --rm --name addons-818905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-818905 --entrypoint /usr/bin/test -v addons-818905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib: (2.394563419s)
	I1212 22:03:10.436030   17479 oci.go:107] Successfully prepared a docker volume addons-818905
	I1212 22:03:10.436055   17479 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:03:10.436073   17479 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 22:03:10.436128   17479 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-818905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 22:03:15.525392   17479 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-818905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir: (5.089205861s)
	I1212 22:03:15.525424   17479 kic.go:203] duration metric: took 5.089347 seconds to extract preloaded images to volume
	W1212 22:03:15.525560   17479 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 22:03:15.525672   17479 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 22:03:15.579967   17479 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-818905 --name addons-818905 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-818905 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-818905 --network addons-818905 --ip 192.168.49.2 --volume addons-818905:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 22:03:15.908593   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Running}}
	I1212 22:03:15.925239   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:15.941593   17479 cli_runner.go:164] Run: docker exec addons-818905 stat /var/lib/dpkg/alternatives/iptables
	I1212 22:03:15.992659   17479 oci.go:144] the created container "addons-818905" has a running status.
	I1212 22:03:15.992687   17479 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa...
	I1212 22:03:16.078873   17479 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 22:03:16.097774   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:16.113584   17479 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 22:03:16.113601   17479 kic_runner.go:114] Args: [docker exec --privileged addons-818905 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 22:03:16.173009   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:16.190198   17479 machine.go:88] provisioning docker machine ...
	I1212 22:03:16.190229   17479 ubuntu.go:169] provisioning hostname "addons-818905"
	I1212 22:03:16.190276   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:16.208215   17479 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:16.208554   17479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1212 22:03:16.208570   17479 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-818905 && echo "addons-818905" | sudo tee /etc/hostname
	I1212 22:03:16.210225   17479 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55468->127.0.0.1:32772: read: connection reset by peer
	I1212 22:03:19.341346   17479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-818905
	
	I1212 22:03:19.341422   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.357565   17479 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:19.357909   17479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1212 22:03:19.357926   17479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-818905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-818905/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-818905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:03:19.475035   17479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:03:19.475064   17479 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:03:19.475105   17479 ubuntu.go:177] setting up certificates
	I1212 22:03:19.475116   17479 provision.go:83] configureAuth start
	I1212 22:03:19.475165   17479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-818905
	I1212 22:03:19.491346   17479 provision.go:138] copyHostCerts
	I1212 22:03:19.491418   17479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:03:19.491571   17479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:03:19.491774   17479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:03:19.491895   17479 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.addons-818905 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-818905]
	I1212 22:03:19.556879   17479 provision.go:172] copyRemoteCerts
	I1212 22:03:19.556942   17479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:03:19.556972   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.573323   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:19.663758   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 22:03:19.684477   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:03:19.704958   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:03:19.725486   17479 provision.go:86] duration metric: configureAuth took 250.355277ms
	I1212 22:03:19.725516   17479 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:03:19.725691   17479 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:19.725804   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.741380   17479 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:19.741744   17479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1212 22:03:19.741766   17479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:03:19.944446   17479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:03:19.944471   17479 machine.go:91] provisioned docker machine in 3.754252641s
	I1212 22:03:19.944482   17479 client.go:171] LocalClient.Create took 12.653879994s
	I1212 22:03:19.944501   17479 start.go:167] duration metric: libmachine.API.Create for "addons-818905" took 12.653946454s
	I1212 22:03:19.944514   17479 start.go:300] post-start starting for "addons-818905" (driver="docker")
	I1212 22:03:19.944532   17479 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:03:19.944590   17479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:03:19.944635   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.962470   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.051667   17479 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:03:20.054478   17479 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:03:20.054505   17479 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:03:20.054513   17479 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:03:20.054520   17479 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 22:03:20.054528   17479 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:03:20.054577   17479 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:03:20.054598   17479 start.go:303] post-start completed in 110.077707ms
	I1212 22:03:20.054830   17479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-818905
	I1212 22:03:20.071099   17479 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/config.json ...
	I1212 22:03:20.071315   17479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:03:20.071376   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:20.085574   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.171751   17479 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:03:20.175625   17479 start.go:128] duration metric: createHost completed in 12.966458256s
	I1212 22:03:20.175650   17479 start.go:83] releasing machines lock for "addons-818905", held for 12.966580453s
	I1212 22:03:20.175717   17479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-818905
	I1212 22:03:20.191539   17479 ssh_runner.go:195] Run: cat /version.json
	I1212 22:03:20.191616   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:20.191665   17479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:03:20.191730   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:20.208989   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.209596   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.382225   17479 ssh_runner.go:195] Run: systemctl --version
	I1212 22:03:20.385937   17479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:03:20.519348   17479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:03:20.523296   17479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:03:20.540115   17479 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:03:20.540188   17479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:03:20.565506   17479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
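Note: the two find commands above sideline every pre-existing loopback, bridge, and podman CNI config by renaming it with a .mk_disabled suffix, so the kindnet CNI installed later is the only active one. A standalone sketch of the same step (paths from the log; the quoting here is the safe sh -c idiom rather than the log's literal {} substitution):

    # Disable the default loopback config so CRI-O ignores it.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # Do the same for bridge and podman configs (e.g. 87-podman-bridge.conflist).
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;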
	I1212 22:03:20.565541   17479 start.go:475] detecting cgroup driver to use...
	I1212 22:03:20.565572   17479 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:03:20.565608   17479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:03:20.578209   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:03:20.587312   17479 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:03:20.587384   17479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:03:20.598406   17479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:03:20.610319   17479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:03:20.691564   17479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:03:20.768294   17479 docker.go:219] disabling docker service ...
	I1212 22:03:20.768376   17479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:03:20.784354   17479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:03:20.793850   17479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:03:20.857767   17479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:03:20.932360   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:03:20.942532   17479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:03:20.955958   17479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:03:20.956009   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.964362   17479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:03:20.964412   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.972229   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.980349   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.988305   17479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:03:20.995944   17479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:03:21.002558   17479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:03:21.009458   17479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:03:21.082866   17479 ssh_runner.go:195] Run: sudo systemctl restart crio
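Note: the four sed edits above, collected into one script against the drop-in file the log names; the values (pause image registry.k8s.io/pause:3.9, the cgroupfs driver, conmon in the pod cgroup) are the ones this run applied:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and match CRI-O's cgroup driver to the kubelet's.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # Replace any existing conmon_cgroup setting with "pod", then apply.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio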
	I1212 22:03:21.185364   17479 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:03:21.185438   17479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:03:21.188798   17479 start.go:543] Will wait 60s for crictl version
	I1212 22:03:21.188843   17479 ssh_runner.go:195] Run: which crictl
	I1212 22:03:21.191869   17479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:03:21.223224   17479 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 22:03:21.223376   17479 ssh_runner.go:195] Run: crio --version
	I1212 22:03:21.256094   17479 ssh_runner.go:195] Run: crio --version
	I1212 22:03:21.288418   17479 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 22:03:21.289937   17479 cli_runner.go:164] Run: docker network inspect addons-818905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:03:21.307137   17479 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 22:03:21.310543   17479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:03:21.320390   17479 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:03:21.320436   17479 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:03:21.371859   17479 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:03:21.371882   17479 crio.go:415] Images already preloaded, skipping extraction
	I1212 22:03:21.371931   17479 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:03:21.401469   17479 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:03:21.401493   17479 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:03:21.401557   17479 ssh_runner.go:195] Run: crio config
	I1212 22:03:21.440150   17479 cni.go:84] Creating CNI manager for ""
	I1212 22:03:21.440170   17479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:03:21.440184   17479 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:03:21.440200   17479 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-818905 NodeName:addons-818905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:03:21.440333   17479 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-818905"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
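Note: a hedged sketch for sanity-checking a rendered config like the one above before the real init. Newer kubeadm releases ship a "config validate" subcommand, and --dry-run rehearses init without touching the node (binary path as used later in this log; the config is copied to /var/tmp/minikube/kubeadm.yaml before init):

    # Validate the three config documents, then rehearse init with no side effects.
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run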
	I1212 22:03:21.440395   17479 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-818905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:03:21.440440   17479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:03:21.448194   17479 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:03:21.448251   17479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:03:21.455949   17479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1212 22:03:21.470707   17479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:03:21.485502   17479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
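Note: after these three copies land (the kubelet drop-in, the unit file, and the kubeadm config), the effective kubelet command line can be inspected once systemd reloads; a sketch using standard systemctl:

    # Show the base unit merged with the 10-kubeadm.conf drop-in written above.
    systemctl cat kubelet
    # Confirm the drop-in and its ExecStart override were picked up.
    systemctl show kubelet -p DropInPaths -p ExecStart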
	I1212 22:03:21.499965   17479 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 22:03:21.502696   17479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:03:21.512025   17479 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905 for IP: 192.168.49.2
	I1212 22:03:21.512053   17479 certs.go:190] acquiring lock for shared ca certs: {Name:mkef1e7b14f91e4f04d1e9cbbafdc8c42ba43b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.512169   17479 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key
	I1212 22:03:21.710544   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt ...
	I1212 22:03:21.710574   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt: {Name:mk0e0fd4d038396d1fc2bf31caea05ecaf29aaee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.710732   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key ...
	I1212 22:03:21.710742   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key: {Name:mk674a523a317792bfd36cd4ddb9c74a608f21e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.710808   17479 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key
	I1212 22:03:21.951207   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt ...
	I1212 22:03:21.951239   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt: {Name:mkd352921ab3d8753775b65aa4c4e555b772bbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.951401   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key ...
	I1212 22:03:21.951411   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key: {Name:mka56f39e927e7d629de44160a204845d1ed44d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.951511   17479 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.key
	I1212 22:03:21.951530   17479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt with IP's: []
	I1212 22:03:22.127869   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt ...
	I1212 22:03:22.127897   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: {Name:mk42354b39cf88ed47b666838f3906fc0059e663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.128042   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.key ...
	I1212 22:03:22.128052   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.key: {Name:mk4c69e12ba3cdd1bb99f1f7b7bcec542ea8ed07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.128126   17479 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2
	I1212 22:03:22.128142   17479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:03:22.354306   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2 ...
	I1212 22:03:22.354336   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2: {Name:mkf0a5ffe152afd7dc2ae2432b33e4908560bb02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.354484   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2 ...
	I1212 22:03:22.354498   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2: {Name:mke7664a0e83520068579d72203d39d736831243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.354560   17479 certs.go:337] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt
	I1212 22:03:22.354620   17479 certs.go:341] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key
	I1212 22:03:22.354661   17479 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key
	I1212 22:03:22.354676   17479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt with IP's: []
	I1212 22:03:22.595702   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt ...
	I1212 22:03:22.595729   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt: {Name:mk7296c907a900eb3a472d81afb4b249f7081624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.595877   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key ...
	I1212 22:03:22.595887   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key: {Name:mkcf27d57e94511b1f7c217dfcc80b2ae395803b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.596045   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 22:03:22.596079   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:03:22.596098   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:03:22.596129   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem (1675 bytes)
	I1212 22:03:22.596695   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:03:22.617968   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 22:03:22.638254   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:03:22.658123   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:03:22.678442   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:03:22.698139   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:03:22.717686   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:03:22.737245   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:03:22.757440   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:03:22.776860   17479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:03:22.791189   17479 ssh_runner.go:195] Run: openssl version
	I1212 22:03:22.796246   17479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:03:22.803700   17479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:22.806582   17479 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:22.806640   17479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:22.812476   17479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
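Note: the link name b5213941.0 above is the OpenSSL subject hash of the CA certificate plus a .0 suffix, which is how TLS libraries look up certs in /etc/ssl/certs. The hash depends only on the certificate subject (CN=minikubeCA), so it is the same across minikube runs; a sketch of recomputing it:

    # Prints the 8-hex-digit subject hash ("b5213941" for minikubeCA).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem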
	I1212 22:03:22.819797   17479 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:03:22.822408   17479 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:03:22.822445   17479 kubeadm.go:404] StartCluster: {Name:addons-818905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:03:22.822536   17479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:03:22.822583   17479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:03:22.853248   17479 cri.go:89] found id: ""
	I1212 22:03:22.853301   17479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:03:22.862317   17479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:03:22.869395   17479 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 22:03:22.869443   17479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:03:22.876490   17479 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:03:22.876527   17479 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 22:03:22.948632   17479 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1212 22:03:23.005795   17479 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:03:31.823842   17479 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:03:31.823936   17479 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:03:31.824081   17479 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 22:03:31.824175   17479 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1212 22:03:31.824230   17479 kubeadm.go:322] OS: Linux
	I1212 22:03:31.824284   17479 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 22:03:31.824344   17479 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 22:03:31.824408   17479 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 22:03:31.824468   17479 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 22:03:31.824544   17479 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 22:03:31.824619   17479 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 22:03:31.824681   17479 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 22:03:31.824748   17479 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 22:03:31.824820   17479 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 22:03:31.824915   17479 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:03:31.825048   17479 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:03:31.825170   17479 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:03:31.825247   17479 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:03:31.827090   17479 out.go:204]   - Generating certificates and keys ...
	I1212 22:03:31.827199   17479 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:03:31.827302   17479 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:03:31.827400   17479 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:03:31.827501   17479 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:03:31.827611   17479 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:03:31.827711   17479 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:03:31.827795   17479 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:03:31.827951   17479 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-818905 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 22:03:31.828047   17479 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:03:31.828224   17479 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-818905 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 22:03:31.828316   17479 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:03:31.828410   17479 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:03:31.828472   17479 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:03:31.828541   17479 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:03:31.828606   17479 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:03:31.828678   17479 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:03:31.828768   17479 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:03:31.828844   17479 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:03:31.828955   17479 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:03:31.829037   17479 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:03:31.830885   17479 out.go:204]   - Booting up control plane ...
	I1212 22:03:31.830991   17479 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:03:31.831087   17479 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:03:31.831174   17479 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:03:31.831346   17479 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:03:31.831464   17479 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:03:31.831517   17479 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:03:31.831772   17479 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:03:31.831871   17479 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502555 seconds
	I1212 22:03:31.832033   17479 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:03:31.832211   17479 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:03:31.832299   17479 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:03:31.832547   17479 kubeadm.go:322] [mark-control-plane] Marking the node addons-818905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:03:31.832628   17479 kubeadm.go:322] [bootstrap-token] Using token: weiybl.9w3njaxhbwpige62
	I1212 22:03:31.834067   17479 out.go:204]   - Configuring RBAC rules ...
	I1212 22:03:31.834195   17479 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:03:31.834295   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:03:31.834480   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:03:31.834622   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:03:31.834777   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:03:31.834882   17479 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:03:31.834971   17479 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:03:31.835006   17479 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:03:31.835043   17479 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:03:31.835048   17479 kubeadm.go:322] 
	I1212 22:03:31.835091   17479 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:03:31.835097   17479 kubeadm.go:322] 
	I1212 22:03:31.835153   17479 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:03:31.835159   17479 kubeadm.go:322] 
	I1212 22:03:31.835217   17479 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:03:31.835300   17479 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:03:31.835363   17479 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:03:31.835373   17479 kubeadm.go:322] 
	I1212 22:03:31.835433   17479 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:03:31.835443   17479 kubeadm.go:322] 
	I1212 22:03:31.835514   17479 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:03:31.835527   17479 kubeadm.go:322] 
	I1212 22:03:31.835626   17479 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:03:31.835727   17479 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:03:31.835822   17479 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:03:31.835833   17479 kubeadm.go:322] 
	I1212 22:03:31.835943   17479 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:03:31.836078   17479 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:03:31.836093   17479 kubeadm.go:322] 
	I1212 22:03:31.836177   17479 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token weiybl.9w3njaxhbwpige62 \
	I1212 22:03:31.836265   17479 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f \
	I1212 22:03:31.836287   17479 kubeadm.go:322] 	--control-plane 
	I1212 22:03:31.836293   17479 kubeadm.go:322] 
	I1212 22:03:31.836360   17479 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:03:31.836367   17479 kubeadm.go:322] 
	I1212 22:03:31.836431   17479 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token weiybl.9w3njaxhbwpige62 \
	I1212 22:03:31.836599   17479 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f 
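Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. A sketch of recomputing it on the node, using the standard kubeadm recipe pointed at minikube's cert dir:

    # Should reproduce the sha256:aa3e... value printed in the join command.
    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | sha256sum | awk '{print "sha256:"$1}'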
	I1212 22:03:31.836620   17479 cni.go:84] Creating CNI manager for ""
	I1212 22:03:31.836626   17479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:03:31.838215   17479 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 22:03:31.839497   17479 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:03:31.842711   17479 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:03:31.842725   17479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:03:31.857812   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:03:32.483281   17479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:03:32.483323   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:32.483340   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=addons-818905 minikube.k8s.io/updated_at=2023_12_12T22_03_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:32.556750   17479 ops.go:34] apiserver oom_adj: -16
	I1212 22:03:32.556945   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:32.617952   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:33.178601   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:33.678196   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:34.178830   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:34.678852   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:35.178960   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:35.678248   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:36.178160   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:36.678163   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:37.178029   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:37.678298   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:38.178947   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:38.678830   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:39.178362   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:39.678189   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:40.178774   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:40.678496   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:41.178704   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:41.679048   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:42.178716   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:42.678572   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:43.178545   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:43.678950   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.178249   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.241022   17479 kubeadm.go:1088] duration metric: took 11.75774238s to wait for elevateKubeSystemPrivileges.
	I1212 22:03:44.241061   17479 kubeadm.go:406] StartCluster complete in 21.41861949s
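Note: the repeated "get sa default" calls above are minikube polling until the default service account exists, after the minikube-rbac clusterrolebinding granted it cluster-admin at 22:03:32. A sketch for confirming the grant once the cluster is up (standard kubectl, run from the host against this profile's kubeconfig):

    # Should print "yes" once the minikube-rbac binding is active.
    kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default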
	I1212 22:03:44.241083   17479 settings.go:142] acquiring lock: {Name:mk857225ea2f0544984670c00dbb01f431ce59c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:44.241195   17479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:03:44.241542   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/kubeconfig: {Name:mkd3e8de36f0003ff040c445ac6e47a46685daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:44.241704   17479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:03:44.241777   17479 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
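Note: the toEnable map above is exactly what the addons CLI toggles per profile; a hedged sketch of driving the same switches by hand with a stock minikube binary:

    # Inspect and toggle addons for this profile.
    minikube -p addons-818905 addons list
    minikube -p addons-818905 addons enable ingress
    minikube -p addons-818905 addons disable inspektor-gadget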
	I1212 22:03:44.241863   17479 addons.go:69] Setting volumesnapshots=true in profile "addons-818905"
	I1212 22:03:44.241874   17479 addons.go:69] Setting ingress-dns=true in profile "addons-818905"
	I1212 22:03:44.241887   17479 addons.go:231] Setting addon volumesnapshots=true in "addons-818905"
	I1212 22:03:44.241889   17479 addons.go:69] Setting default-storageclass=true in profile "addons-818905"
	I1212 22:03:44.241904   17479 addons.go:231] Setting addon ingress-dns=true in "addons-818905"
	I1212 22:03:44.241912   17479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-818905"
	I1212 22:03:44.241937   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.241912   17479 addons.go:69] Setting helm-tiller=true in profile "addons-818905"
	I1212 22:03:44.241947   17479 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-818905"
	I1212 22:03:44.241966   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.241976   17479 addons.go:69] Setting gcp-auth=true in profile "addons-818905"
	I1212 22:03:44.241997   17479 mustload.go:65] Loading cluster: addons-818905
	I1212 22:03:44.242023   17479 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-818905"
	I1212 22:03:44.242074   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.242229   17479 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:44.242280   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242456   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242465   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242486   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242515   17479 addons.go:69] Setting metrics-server=true in profile "addons-818905"
	I1212 22:03:44.242530   17479 addons.go:231] Setting addon metrics-server=true in "addons-818905"
	I1212 22:03:44.242600   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.242866   17479 addons.go:69] Setting storage-provisioner=true in profile "addons-818905"
	I1212 22:03:44.242883   17479 addons.go:231] Setting addon storage-provisioner=true in "addons-818905"
	I1212 22:03:44.242931   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.243031   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.243174   17479 addons.go:69] Setting registry=true in profile "addons-818905"
	I1212 22:03:44.243203   17479 addons.go:231] Setting addon registry=true in "addons-818905"
	I1212 22:03:44.243244   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.243478   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.243745   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.243856   17479 addons.go:69] Setting ingress=true in profile "addons-818905"
	I1212 22:03:44.243875   17479 addons.go:231] Setting addon ingress=true in "addons-818905"
	I1212 22:03:44.243909   17479 addons.go:69] Setting cloud-spanner=true in profile "addons-818905"
	I1212 22:03:44.243942   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.243949   17479 addons.go:231] Setting addon cloud-spanner=true in "addons-818905"
	I1212 22:03:44.244028   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.244400   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.244474   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.241970   17479 addons.go:231] Setting addon helm-tiller=true in "addons-818905"
	I1212 22:03:44.245339   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.245798   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.241936   17479 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:44.242510   17479 addons.go:69] Setting inspektor-gadget=true in profile "addons-818905"
	I1212 22:03:44.246084   17479 addons.go:231] Setting addon inspektor-gadget=true in "addons-818905"
	I1212 22:03:44.246145   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.246554   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.247399   17479 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-818905"
	I1212 22:03:44.242489   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242503   17479 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-818905"
	I1212 22:03:44.253738   17479 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-818905"
	I1212 22:03:44.253819   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.254340   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.255718   17479 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-818905"
	I1212 22:03:44.256097   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.277171   17479 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 22:03:44.279397   17479 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:03:44.279419   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 22:03:44.279475   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.282332   17479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:03:44.283890   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 22:03:44.283733   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 22:03:44.283742   17479 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1212 22:03:44.283857   17479 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:03:44.285773   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 22:03:44.285890   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:03:44.286601   17479 addons.go:231] Setting addon default-storageclass=true in "addons-818905"
	I1212 22:03:44.287595   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.287774   17479 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1212 22:03:44.287792   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1212 22:03:44.287846   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.288011   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 22:03:44.288019   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 22:03:44.288052   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.288124   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.288185   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 22:03:44.288274   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.292686   17479 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 22:03:44.294908   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 22:03:44.294929   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 22:03:44.294996   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.291164   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 22:03:44.296566   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 22:03:44.298040   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 22:03:44.299674   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:44.299644   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 22:03:44.301358   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 22:03:44.304008   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:44.302682   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 22:03:44.306677   17479 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 22:03:44.305810   17479 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:03:44.307903   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 22:03:44.309160   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.309230   17479 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-818905" context rescaled to 1 replicas
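
	[Note: the "rescaled to 1 replicas" step above trims CoreDNS down to a single replica for this one-node cluster. A minimal hand-run equivalent, assuming kubectl is pointed at the addons-818905 context:

	    kubectl --context addons-818905 -n kube-system scale deployment coredns --replicas=1
	]
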
	I1212 22:03:44.309264   17479 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:03:44.310468   17479 out.go:177] * Verifying Kubernetes components...
	I1212 22:03:44.309348   17479 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 22:03:44.309406   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 22:03:44.309414   17479 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 22:03:44.312748   17479 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 22:03:44.311685   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:03:44.311697   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 22:03:44.314254   17479 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:03:44.314831   17479 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 22:03:44.314869   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.316141   17479 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 22:03:44.316154   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 22:03:44.316271   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 22:03:44.316306   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 22:03:44.316344   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.316526   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.316672   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.321615   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.335671   17479 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-818905"
	I1212 22:03:44.335718   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.336680   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.343629   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.345634   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.348517   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.350122   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.369305   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.382468   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.384304   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.390569   17479 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 22:03:44.389613   17479 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:03:44.390603   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:03:44.390658   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.392113   17479 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 22:03:44.392133   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 22:03:44.392179   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.391058   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.394101   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.396466   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.399512   17479 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 22:03:44.400771   17479 out.go:177]   - Using image docker.io/busybox:stable
	I1212 22:03:44.402165   17479 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:03:44.402186   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 22:03:44.402242   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.406883   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.409375   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	W1212 22:03:44.420535   17479 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 22:03:44.420565   17479 retry.go:31] will retry after 223.935691ms: ssh: handshake failed: EOF
	I1212 22:03:44.420698   17479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:03:44.421091   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.421748   17479 node_ready.go:35] waiting up to 6m0s for node "addons-818905" to be "Ready" ...
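
	[Note: node_ready.go polls the node's Ready condition for up to 6m0s; the recurring `"Ready":"False"` lines further down are those polls. A rough kubectl equivalent of the same wait, as a sketch assuming the default kubeconfig context:

	    kubectl wait --for=condition=Ready node/addons-818905 --timeout=6m
	]
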
	I1212 22:03:44.620512   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:03:44.725093   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:03:44.726772   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 22:03:44.726800   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 22:03:44.729801   17479 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 22:03:44.729829   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 22:03:44.737621   17479 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 22:03:44.737657   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 22:03:44.819039   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:03:44.824135   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 22:03:44.829488   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:03:44.839323   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 22:03:44.839350   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 22:03:44.930219   17479 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:03:44.930246   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 22:03:44.931426   17479 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1212 22:03:44.931446   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1212 22:03:45.019108   17479 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 22:03:45.019140   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 22:03:45.028038   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:03:45.036984   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 22:03:45.037017   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 22:03:45.128421   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:03:45.134182   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 22:03:45.134207   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 22:03:45.135538   17479 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 22:03:45.135567   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 22:03:45.323593   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:03:45.420844   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:03:45.420882   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 22:03:45.427733   17479 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:03:45.427811   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1212 22:03:45.428060   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 22:03:45.428097   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 22:03:45.617325   17479 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 22:03:45.617355   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 22:03:45.635521   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 22:03:45.635599   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 22:03:45.636295   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:03:45.831671   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 22:03:45.831767   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 22:03:46.018648   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:03:46.024735   17479 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:46.024822   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 22:03:46.218654   17479 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 22:03:46.218684   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 22:03:46.240281   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 22:03:46.240323   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 22:03:46.518377   17479 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 22:03:46.518462   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 22:03:46.530104   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:46.533961   17479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.113230679s)
	I1212 22:03:46.534005   17479 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
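
	[Note: the /bin/bash pipeline that just completed rewrites the Corefile held in the coredns ConfigMap: one sed expression inserts a hosts stanza ahead of the forward plugin so pods can resolve host.minikube.internal, and a second inserts a log directive before errors. Reconstructed from the sed text in the command itself, the injected stanza is:

	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	]
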
	I1212 22:03:46.633471   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:46.728484   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 22:03:46.728565   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 22:03:46.935340   17479 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 22:03:46.935420   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 22:03:47.317019   17479 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 22:03:47.317117   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 22:03:47.424861   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 22:03:47.424900   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 22:03:47.624578   17479 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 22:03:47.624611   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 22:03:47.720500   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 22:03:47.720530   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 22:03:47.929777   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 22:03:47.929805   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 22:03:48.117257   17479 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:03:48.117286   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 22:03:48.138266   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:03:48.138292   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 22:03:48.438589   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:03:48.531973   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:03:48.819986   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.199419648s)
	I1212 22:03:49.031210   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:49.521074   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.795934485s)
	I1212 22:03:49.521157   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.702079186s)
	I1212 22:03:49.521209   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.6970475s)
	I1212 22:03:50.616928   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.787401721s)
	I1212 22:03:50.616973   17479 addons.go:467] Verifying addon ingress=true in "addons-818905"
	I1212 22:03:50.616995   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.588921069s)
	I1212 22:03:50.618525   17479 out.go:177] * Verifying ingress addon...
	I1212 22:03:50.617072   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.488617244s)
	I1212 22:03:50.617125   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.29349408s)
	I1212 22:03:50.617201   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.980802411s)
	I1212 22:03:50.617264   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.598519196s)
	I1212 22:03:50.617447   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.087304143s)
	W1212 22:03:50.620378   17479 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:03:50.620391   17479 addons.go:467] Verifying addon registry=true in "addons-818905"
	I1212 22:03:50.620405   17479 retry.go:31] will retry after 179.516209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
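
	[Note: this is the usual CRD-establishment race. A single `kubectl apply` creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one pass, but the API server has not yet registered the new kind when the class manifest is submitted, hence "no matches for kind \"VolumeSnapshotClass\"". The retry below succeeds because by then the CRDs from the first pass are established. One way to avoid the race by hand, sketched against the manifest paths from this log:

	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	]
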
	I1212 22:03:50.620406   17479 addons.go:467] Verifying addon metrics-server=true in "addons-818905"
	I1212 22:03:50.622389   17479 out.go:177] * Verifying registry addon...
	I1212 22:03:50.621287   17479 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 22:03:50.624602   17479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 22:03:50.628779   17479 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 22:03:50.628807   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 22:03:50.629725   17479 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
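
	[Note: the default-storageclass failure above is an optimistic-concurrency conflict: the update of the local-path StorageClass carried a stale resourceVersion while another controller was modifying the object. Patching sidesteps this, since a patch carries no resourceVersion; a hedged manual sketch, assuming minikube's provisioner class keeps its default name "standard":

	    kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	    kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	]
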
	I1212 22:03:50.630305   17479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:03:50.630323   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:50.632720   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:50.634397   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:50.801071   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:51.136478   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:51.138048   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:51.154694   17479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 22:03:51.154781   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:51.174436   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:51.333345   17479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 22:03:51.351068   17479 addons.go:231] Setting addon gcp-auth=true in "addons-818905"
	I1212 22:03:51.351126   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:51.351638   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:51.372837   17479 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 22:03:51.372879   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:51.388380   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:51.518577   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:51.621192   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.182530675s)
	I1212 22:03:51.621238   17479 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-818905"
	I1212 22:03:51.621268   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.089229502s)
	I1212 22:03:51.623274   17479 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 22:03:51.626268   17479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 22:03:51.631660   17479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:03:51.631724   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:51.635113   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:51.636420   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:51.637584   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:51.934732   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.133612159s)
	I1212 22:03:51.937403   17479 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 22:03:51.938869   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:51.940281   17479 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 22:03:51.940302   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 22:03:51.956441   17479 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 22:03:51.956467   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 22:03:51.971241   17479 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:03:51.971263   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 22:03:51.985967   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:03:52.136500   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:52.138486   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:52.138614   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:52.361517   17479 addons.go:467] Verifying addon gcp-auth=true in "addons-818905"
	I1212 22:03:52.363268   17479 out.go:177] * Verifying gcp-auth addon...
	I1212 22:03:52.365445   17479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 22:03:52.367995   17479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 22:03:52.368012   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:52.372084   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:52.637145   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:52.638214   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:52.639002   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:52.876195   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:53.138146   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:53.138655   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:53.140090   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:53.376424   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:53.639521   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:53.640681   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:53.642443   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:53.918666   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:54.016702   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:54.137716   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:54.139941   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:54.141315   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:54.417408   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:54.637691   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:54.639080   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:54.639494   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:54.875474   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:55.136970   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:55.139619   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:55.139791   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:55.376112   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:55.637398   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:55.638391   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:55.639327   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:55.875503   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:56.136311   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:56.138959   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:56.139147   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:56.376156   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:56.451567   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:56.637246   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:56.639479   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:56.639599   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:56.875355   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:57.136238   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:57.138195   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:57.138663   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:57.375483   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:57.637380   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:57.638172   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:57.639258   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:57.874916   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:58.137371   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:58.138153   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:58.138920   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:58.375688   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:58.636369   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:58.640881   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:58.640969   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:58.875795   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:58.951377   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:59.137310   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:59.138814   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:59.138927   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:59.375975   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:59.636694   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:59.638787   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:59.639015   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:59.875475   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:00.136849   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:00.139542   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:00.139740   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:00.375557   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:00.636299   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:00.638565   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:00.638693   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:00.875894   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:01.136696   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:01.138657   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:01.139192   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:01.375107   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:01.451490   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:01.637472   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:01.641281   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:01.642375   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:01.875383   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:02.137287   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:02.138118   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:02.138912   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:02.375443   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:02.637450   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:02.638043   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:02.639175   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:02.876060   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 "waiting for pod" polls for app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=csi-hostpath-driver, and kubernetes.io/minikube-addons=gcp-auth repeat at ~250ms intervals from 22:04:03.136 through 22:04:18.420, all still "Pending: [<nil>]"; node_ready.go:58 reports node "addons-818905" has status "Ready":"False" at 22:04:03.951, 22:04:06.451, 22:04:08.451, 22:04:10.451, 22:04:12.951, 22:04:15.451, and 22:04:17.951 ...]
	I1212 22:04:18.450906   17479 node_ready.go:49] node "addons-818905" has status "Ready":"True"
	I1212 22:04:18.450932   17479 node_ready.go:38] duration metric: took 34.029159082s waiting for node "addons-818905" to be "Ready" ...
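The node_ready.go lines above boil down to reading the node's Ready condition. A minimal client-go sketch of the same check (illustrative only: the function name, kubeconfig loading, and error handling here are assumptions, not minikube's actual code):

// Sketch: report whether a node's Ready condition is True, as the
// node_ready.go log lines above do. Not minikube's implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "addons-818905")
	fmt.Println(ready, err)
}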
	I1212 22:04:18.450943   17479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:04:18.457994   17479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4rvx" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:18.640662   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:18.641704   17479 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:04:18.641722   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:18.642089   17479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:04:18.642103   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:18.919324   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
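Each kapi.go:96 line in this log is one iteration of a poll over a label selector; once pods exist for the selector (the "Found N Pods" lines above), the loop keeps polling until they are all running. A rough sketch of that pattern, with the ~250ms interval and 6-minute timeout inferred from the log cadence rather than taken from minikube's source:

// Sketch of a label-selector readiness poll resembling the kapi.go:96
// loop. Interval and timeout are assumptions inferred from this log.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForSelector(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matching yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // logged above as "current state: Pending"
				}
			}
			return true, nil
		})
}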
	[... kapi.go:96 polling for the same four label selectors continues at ~250ms intervals from 22:04:19.137 through 22:04:20.375, all still "Pending: [<nil>]" ...]
	I1212 22:04:20.520166   17479 pod_ready.go:92] pod "coredns-5dd5756b68-h4rvx" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.520188   17479 pod_ready.go:81] duration metric: took 2.062171329s waiting for pod "coredns-5dd5756b68-h4rvx" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.520209   17479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.525071   17479 pod_ready.go:92] pod "etcd-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.525088   17479 pod_ready.go:81] duration metric: took 4.874289ms waiting for pod "etcd-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.525099   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.529946   17479 pod_ready.go:92] pod "kube-apiserver-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.529968   17479 pod_ready.go:81] duration metric: took 4.863853ms waiting for pod "kube-apiserver-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.529980   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.534738   17479 pod_ready.go:92] pod "kube-controller-manager-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.534761   17479 pod_ready.go:81] duration metric: took 4.770738ms waiting for pod "kube-controller-manager-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.534775   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bl7tf" in "kube-system" namespace to be "Ready" ...
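The pod_ready.go "has status \"Ready\":\"True\"" lines above and below are reads of the pod's Ready condition. In client-go terms the check is roughly (sketch; the helper name is assumed):

// Sketch: true when a pod's Ready condition is True, mirroring the
// pod_ready.go status lines in this log. Helper name is assumed.
package example

import (
	corev1 "k8s.io/api/core/v1"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}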
	I1212 22:04:20.638066   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:20.638642   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:20.639642   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:20.851901   17479 pod_ready.go:92] pod "kube-proxy-bl7tf" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.851925   17479 pod_ready.go:81] duration metric: took 317.142701ms waiting for pod "kube-proxy-bl7tf" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.851934   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.875478   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:21.137259   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:21.138494   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:21.139498   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:21.251459   17479 pod_ready.go:92] pod "kube-scheduler-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:21.251484   17479 pod_ready.go:81] duration metric: took 399.543881ms waiting for pod "kube-scheduler-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:21.251496   17479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace to be "Ready" ...
	[... kapi.go:96 polling for the four addon selectors continues from 22:04:21.374 through 22:04:31.376, all still "Pending: [<nil>]"; pod_ready.go:102 reports pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"False" at 22:04:23.557, 22:04:25.620, 22:04:28.058, and 22:04:30.058 ...]
	I1212 22:04:31.557990   17479 pod_ready.go:92] pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:31.558013   17479 pod_ready.go:81] duration metric: took 10.306510173s waiting for pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:31.558025   17479 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace to be "Ready" ...
	[... kapi.go:96 polling for the four addon selectors continues from 22:04:31.637 through 22:04:43.376, all still "Pending: [<nil>]"; pod_ready.go:102 reports pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"False" at 22:04:33.621, 22:04:36.072, 22:04:38.622, and 22:04:41.118 ...]
	I1212 22:04:43.573369   17479 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:43.573399   17479 pod_ready.go:81] duration metric: took 12.01536683s waiting for pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:43.573423   17479 pod_ready.go:38] duration metric: took 25.122466108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:04:43.573441   17479 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:04:43.573501   17479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:04:43.628809   17479 api_server.go:72] duration metric: took 59.319505035s to wait for apiserver process to appear ...
	I1212 22:04:43.628835   17479 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:04:43.628859   17479 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 22:04:43.633910   17479 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 22:04:43.635293   17479 api_server.go:141] control plane version: v1.28.4
	I1212 22:04:43.635320   17479 api_server.go:131] duration metric: took 6.477326ms to wait for apiserver health ...
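The healthz check logged by api_server.go:253 above is a plain HTTPS GET against the apiserver that expects a 200 with body "ok". A minimal sketch; skipping TLS verification here is purely to keep the example short, and is an assumption rather than how minikube authenticates to the endpoint:

// Sketch: probe the apiserver /healthz endpoint, as logged above.
// InsecureSkipVerify is an example shortcut only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}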
	I1212 22:04:43.635330   17479 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:04:43.637237   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:43.639595   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:43.640743   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:43.646002   17479 system_pods.go:59] 19 kube-system pods found
	I1212 22:04:43.646028   17479 system_pods.go:61] "coredns-5dd5756b68-h4rvx" [99f8f7ef-0255-46a3-801b-21f77c515e1d] Running
	I1212 22:04:43.646039   17479 system_pods.go:61] "csi-hostpath-attacher-0" [f7ee8827-d6c3-4986-b7bb-c26ab9650a7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:04:43.646048   17479 system_pods.go:61] "csi-hostpath-resizer-0" [1b13b2d7-3881-43ca-9e47-ddc885f26185] Running
	I1212 22:04:43.646061   17479 system_pods.go:61] "csi-hostpathplugin-lstwr" [2d937fcd-bb06-4668-9bfe-27c070954c6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:04:43.646075   17479 system_pods.go:61] "etcd-addons-818905" [7e1ccaa3-a0d1-4c82-abd8-ea89ae6385fe] Running
	I1212 22:04:43.646081   17479 system_pods.go:61] "kindnet-m2vln" [7dcc107f-df44-4d34-973b-16843b605e9d] Running
	I1212 22:04:43.646085   17479 system_pods.go:61] "kube-apiserver-addons-818905" [b22f7406-f5fb-48ca-b61a-8d06485c07b6] Running
	I1212 22:04:43.646089   17479 system_pods.go:61] "kube-controller-manager-addons-818905" [4732b038-170a-4fab-a6ef-6a7ce76a8c88] Running
	I1212 22:04:43.646097   17479 system_pods.go:61] "kube-ingress-dns-minikube" [bbda5a3b-e96f-420f-9b8f-95922e769a8d] Running
	I1212 22:04:43.646102   17479 system_pods.go:61] "kube-proxy-bl7tf" [7627c43a-311d-4224-a91e-279a1531c679] Running
	I1212 22:04:43.646107   17479 system_pods.go:61] "kube-scheduler-addons-818905" [31aa08dc-c698-4da7-a503-dbbfed58f4f3] Running
	I1212 22:04:43.646111   17479 system_pods.go:61] "metrics-server-7c66d45ddc-xt6xh" [37df71e4-7ba7-496c-b885-921e393df60e] Running
	I1212 22:04:43.646117   17479 system_pods.go:61] "nvidia-device-plugin-daemonset-jc5wh" [061520bc-edd5-47af-9f5a-ba1bfb03e15e] Running
	I1212 22:04:43.646121   17479 system_pods.go:61] "registry-5g6k8" [fc7ebc27-babc-48dc-928d-1b1782ea01ea] Running
	I1212 22:04:43.646126   17479 system_pods.go:61] "registry-proxy-9wc4f" [957915c2-6516-426f-b900-6143af5f0982] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:04:43.646135   17479 system_pods.go:61] "snapshot-controller-58dbcc7b99-csqbk" [bf6ed58e-0666-4b13-8e54-17ae93b960ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:43.646140   17479 system_pods.go:61] "snapshot-controller-58dbcc7b99-td5gh" [7def081e-409c-4e81-9f5c-267d406f0319] Running
	I1212 22:04:43.646147   17479 system_pods.go:61] "storage-provisioner" [dde8762e-2658-4b5a-b9fb-284579c6615b] Running
	I1212 22:04:43.646151   17479 system_pods.go:61] "tiller-deploy-7b677967b9-8vj4p" [1ba59c52-d351-4cfa-8c97-733b952603c2] Running
	I1212 22:04:43.646158   17479 system_pods.go:74] duration metric: took 10.822488ms to wait for pod list to return data ...
	I1212 22:04:43.646164   17479 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:04:43.647953   17479 default_sa.go:45] found service account: "default"
	I1212 22:04:43.647968   17479 default_sa.go:55] duration metric: took 1.796604ms for default service account to be created ...
	I1212 22:04:43.647974   17479 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:04:43.658927   17479 system_pods.go:86] 19 kube-system pods found
	I1212 22:04:43.658949   17479 system_pods.go:89] "coredns-5dd5756b68-h4rvx" [99f8f7ef-0255-46a3-801b-21f77c515e1d] Running
	I1212 22:04:43.658961   17479 system_pods.go:89] "csi-hostpath-attacher-0" [f7ee8827-d6c3-4986-b7bb-c26ab9650a7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:04:43.658968   17479 system_pods.go:89] "csi-hostpath-resizer-0" [1b13b2d7-3881-43ca-9e47-ddc885f26185] Running
	I1212 22:04:43.658982   17479 system_pods.go:89] "csi-hostpathplugin-lstwr" [2d937fcd-bb06-4668-9bfe-27c070954c6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:04:43.658994   17479 system_pods.go:89] "etcd-addons-818905" [7e1ccaa3-a0d1-4c82-abd8-ea89ae6385fe] Running
	I1212 22:04:43.659004   17479 system_pods.go:89] "kindnet-m2vln" [7dcc107f-df44-4d34-973b-16843b605e9d] Running
	I1212 22:04:43.659015   17479 system_pods.go:89] "kube-apiserver-addons-818905" [b22f7406-f5fb-48ca-b61a-8d06485c07b6] Running
	I1212 22:04:43.659026   17479 system_pods.go:89] "kube-controller-manager-addons-818905" [4732b038-170a-4fab-a6ef-6a7ce76a8c88] Running
	I1212 22:04:43.659036   17479 system_pods.go:89] "kube-ingress-dns-minikube" [bbda5a3b-e96f-420f-9b8f-95922e769a8d] Running
	I1212 22:04:43.659046   17479 system_pods.go:89] "kube-proxy-bl7tf" [7627c43a-311d-4224-a91e-279a1531c679] Running
	I1212 22:04:43.659055   17479 system_pods.go:89] "kube-scheduler-addons-818905" [31aa08dc-c698-4da7-a503-dbbfed58f4f3] Running
	I1212 22:04:43.659065   17479 system_pods.go:89] "metrics-server-7c66d45ddc-xt6xh" [37df71e4-7ba7-496c-b885-921e393df60e] Running
	I1212 22:04:43.659074   17479 system_pods.go:89] "nvidia-device-plugin-daemonset-jc5wh" [061520bc-edd5-47af-9f5a-ba1bfb03e15e] Running
	I1212 22:04:43.659083   17479 system_pods.go:89] "registry-5g6k8" [fc7ebc27-babc-48dc-928d-1b1782ea01ea] Running
	I1212 22:04:43.659093   17479 system_pods.go:89] "registry-proxy-9wc4f" [957915c2-6516-426f-b900-6143af5f0982] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:04:43.659107   17479 system_pods.go:89] "snapshot-controller-58dbcc7b99-csqbk" [bf6ed58e-0666-4b13-8e54-17ae93b960ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:43.659118   17479 system_pods.go:89] "snapshot-controller-58dbcc7b99-td5gh" [7def081e-409c-4e81-9f5c-267d406f0319] Running
	I1212 22:04:43.659129   17479 system_pods.go:89] "storage-provisioner" [dde8762e-2658-4b5a-b9fb-284579c6615b] Running
	I1212 22:04:43.659138   17479 system_pods.go:89] "tiller-deploy-7b677967b9-8vj4p" [1ba59c52-d351-4cfa-8c97-733b952603c2] Running
	I1212 22:04:43.659149   17479 system_pods.go:126] duration metric: took 11.169202ms to wait for k8s-apps to be running ...
	I1212 22:04:43.659161   17479 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:04:43.659208   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:04:43.721826   17479 system_svc.go:56] duration metric: took 62.644305ms WaitForService to wait for kubelet.
	I1212 22:04:43.721862   17479 kubeadm.go:581] duration metric: took 59.41257317s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:04:43.721889   17479 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:04:43.724980   17479 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 22:04:43.725011   17479 node_conditions.go:123] node cpu capacity is 8
	I1212 22:04:43.725024   17479 node_conditions.go:105] duration metric: took 3.12891ms to run NodePressure ...
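The capacity and NodePressure figures above come straight off the Node object's status. Roughly, given a node already fetched as in the earlier sketch (function name assumed):

// Sketch: print the capacity and pressure conditions summarized by the
// node_conditions.go lines above. "node" is assumed pre-fetched.
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func reportNode(node *corev1.Node) {
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. 304681132Ki
	cpu := node.Status.Capacity[corev1.ResourceCPU]                  // e.g. 8
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				fmt.Printf("pressure: %s=%s\n", c.Type, c.Status)
			}
		}
	}
}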
	I1212 22:04:43.725038   17479 start.go:228] waiting for startup goroutines ...
	[... kapi.go:96 polling for gcp-auth, ingress-nginx, registry, and csi-hostpath-driver continues from 22:04:43.876 through 22:04:47.636, all still "Pending: [<nil>]" ...]
	I1212 22:04:47.638962   17479 kapi.go:107] duration metric: took 57.014360615s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 22:04:47.639506   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:47.875466   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:48.137628   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:48.140267   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:48.375561   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:48.637624   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:48.641072   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:48.875512   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:49.137622   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:49.140073   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:49.375251   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:49.636886   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:49.639869   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:49.875740   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:50.137521   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:50.140757   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:50.375779   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:50.637191   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:50.640064   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:50.875795   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:51.137279   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:51.140411   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:51.375205   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:51.636912   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:51.639463   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:51.875832   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:52.137560   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:52.140418   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:52.375617   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:52.637944   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:52.640117   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:52.876058   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:53.136794   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:53.140562   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:53.375082   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:53.636476   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:53.639065   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:53.927582   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:54.137446   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:54.142624   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:54.421464   17479 kapi.go:107] duration metric: took 1m2.056021595s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 22:04:54.423868   17479 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-818905 cluster.
	I1212 22:04:54.425712   17479 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 22:04:54.427896   17479 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 22:04:54.637221   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:54.640883   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:55.139612   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:55.143128   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:55.637825   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:55.640787   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:56.136571   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:56.140139   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:56.636915   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:56.639773   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:57.137091   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:57.139803   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:57.638159   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:57.640544   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:58.137083   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:58.139825   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:58.636833   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:58.639946   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:59.136879   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:59.139575   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:59.680855   17479 kapi.go:107] duration metric: took 1m9.059568635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 22:04:59.681171   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:00.141052   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:00.640584   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:01.140902   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:01.640120   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:02.141012   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:02.640095   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:03.140557   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:03.640476   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:04.139667   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:04.640049   17479 kapi.go:107] duration metric: took 1m13.013782907s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 22:05:04.641983   17479 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, helm-tiller, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1212 22:05:04.643476   17479 addons.go:502] enable addons completed in 1m20.401697082s: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner metrics-server helm-tiller storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1212 22:05:04.643517   17479 start.go:233] waiting for cluster config update ...
	I1212 22:05:04.643533   17479 start.go:242] writing updated cluster config ...
	I1212 22:05:04.643789   17479 ssh_runner.go:195] Run: rm -f paused
	I1212 22:05:04.691005   17479 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:05:04.692931   17479 out.go:177] * Done! kubectl is now configured to use "addons-818905" cluster and "default" namespace by default
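Note: the repeated kapi.go:96 entries above are minikube's addon-readiness poll: list the pods matching a label selector, retry while any are still Pending, then record a duration metric once all report Running. Below is a minimal client-go sketch of a loop with that shape; it is illustrative only, not minikube's actual kapi implementation. The label selector and namespace come from the log, while the helper name, poll interval, and timeout are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSelector polls until every pod matching selector is Running,
    // mirroring the "waiting for pod ... current state: Pending" lines above.
    func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient: keep polling, as the log does while "Pending: [<nil>]"
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        start := time.Now()
        if err := waitForSelector(context.Background(), cs, "kube-system",
            "kubernetes.io/minikube-addons=registry"); err != nil {
            panic(err)
        }
        fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=registry\n", time.Since(start))
    }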
	
	* 
	* ==> CRI-O <==
	* Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.642437874Z" level=info msg="Stopping pod sandbox: b7774c539a21a51e402f3491ecb6e100fca3582a86b02ac860e24473a6a9eaf6" id=40e6af86-719f-4372-a1bf-eb0b0ec8948b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.643039527Z" level=info msg="Stopped pod sandbox: b7774c539a21a51e402f3491ecb6e100fca3582a86b02ac860e24473a6a9eaf6" id=40e6af86-719f-4372-a1bf-eb0b0ec8948b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.643136840Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=085b1218-add1-4b32-96f7-602d9ca780e8 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.643889391Z" level=info msg="Creating container: default/hello-world-app-5d77478584-c4cfn/hello-world-app" id=c35d727e-c0cb-4f96-b4e1-61f76415ea4c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.643985159Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.694693357Z" level=info msg="Removing container: ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97" id=78fba202-7dde-4ab3-b13b-1a2042d7608a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.707279479Z" level=info msg="Removed container ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=78fba202-7dde-4ab3-b13b-1a2042d7608a name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.720015168Z" level=info msg="Created container c3dd7dd1dcc742f7af258c49ed4dd2c39fd17a5a0dfb92ed4c66e17765d1d446: default/hello-world-app-5d77478584-c4cfn/hello-world-app" id=c35d727e-c0cb-4f96-b4e1-61f76415ea4c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.720561924Z" level=info msg="Starting container: c3dd7dd1dcc742f7af258c49ed4dd2c39fd17a5a0dfb92ed4c66e17765d1d446" id=a84521be-80cf-4391-b962-ad37107f867b name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 22:07:35 addons-818905 crio[951]: time="2023-12-12 22:07:35.728415890Z" level=info msg="Started container" PID=9939 containerID=c3dd7dd1dcc742f7af258c49ed4dd2c39fd17a5a0dfb92ed4c66e17765d1d446 description=default/hello-world-app-5d77478584-c4cfn/hello-world-app id=a84521be-80cf-4391-b962-ad37107f867b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6028a5a3cbc970e4cb558b35df9d838fcd1f73764e26a695a6fc0e4924a9ea52
	Dec 12 22:07:37 addons-818905 crio[951]: time="2023-12-12 22:07:37.529914984Z" level=info msg="Stopping container: a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080 (timeout: 2s)" id=404c4789-e96d-4e79-b760-076fc51351ce name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.535608134Z" level=warning msg="Stopping container a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=404c4789-e96d-4e79-b760-076fc51351ce name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 22:07:39 addons-818905 conmon[5600]: conmon a80511f2cc3ed6fc6ee3 <ninfo>: container 5612 exited with status 137
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.666005844Z" level=info msg="Stopped container a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080: ingress-nginx/ingress-nginx-controller-7c6974c4d8-wchx7/controller" id=404c4789-e96d-4e79-b760-076fc51351ce name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.666553603Z" level=info msg="Stopping pod sandbox: 9511d010779627941a8bf78cc99a0a0017f716fdb118090eb6caccbbdcf8ed4d" id=95a3c84f-5a7b-4863-b1e8-a09e80be2094 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.669216230Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-YZSCS4WDKKGZO7QN - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-Q4PWHY5OEFHRUBNV - [0:0]\n-X KUBE-HP-YZSCS4WDKKGZO7QN\n-X KUBE-HP-Q4PWHY5OEFHRUBNV\nCOMMIT\n"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.670443766Z" level=info msg="Closing host port tcp:80"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.670474979Z" level=info msg="Closing host port tcp:443"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.671688668Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.671709220Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.671863335Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-wchx7 Namespace:ingress-nginx ID:9511d010779627941a8bf78cc99a0a0017f716fdb118090eb6caccbbdcf8ed4d UID:31a710c2-33fa-4729-a129-efb759861c19 NetNS:/var/run/netns/d10b3ae9-1c31-490a-8d4c-5c6c9d36671e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.672025892Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-wchx7 from CNI network \"kindnet\" (type=ptp)"
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.696786074Z" level=info msg="Stopped pod sandbox: 9511d010779627941a8bf78cc99a0a0017f716fdb118090eb6caccbbdcf8ed4d" id=95a3c84f-5a7b-4863-b1e8-a09e80be2094 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.704746636Z" level=info msg="Removing container: a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080" id=dde8fbaa-f06c-4b56-b0e5-1e6eaca8b939 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:07:39 addons-818905 crio[951]: time="2023-12-12 22:07:39.717165320Z" level=info msg="Removed container a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080: ingress-nginx/ingress-nginx-controller-7c6974c4d8-wchx7/controller" id=dde8fbaa-f06c-4b56-b0e5-1e6eaca8b939 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c3dd7dd1dcc74       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   6028a5a3cbc97       hello-world-app-5d77478584-c4cfn
	fc82c5c6bc667       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   84c21280efaa0       nginx
	b1ee5925a9708       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   fdc14f8613b34       gcp-auth-d4c87556c-bs24v
	18966951754e6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   2 minutes ago       Exited              patch                     0                   a85ca2c1a0495       ingress-nginx-admission-patch-vwf7q
	3826e4fb13cb6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   2 minutes ago       Exited              create                    0                   8e36add1cbdd5       ingress-nginx-admission-create-d89bq
	52293c0b2f439       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   58a6669777cda       storage-provisioner
	ed348533ef672       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   68ea693663b33       coredns-5dd5756b68-h4rvx
	2ef41bd4e9451       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             3 minutes ago       Running             kindnet-cni               0                   0ec88a16ccdd2       kindnet-m2vln
	81c775386ee4e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             3 minutes ago       Running             kube-proxy                0                   868c55f7c3d1b       kube-proxy-bl7tf
	0b87a62984772       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   993c5a1d8bc2c       kube-scheduler-addons-818905
	1f13b146792de       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   ab505411af167       kube-controller-manager-addons-818905
	dda8cb4e3c38a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   4be224fec035c       kube-apiserver-addons-818905
	1e20b73cc258a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   6487d343c9a2a       etcd-addons-818905
	
	* 
	* ==> coredns [ed348533ef6724d77f5ca2aa99bab9b642233855afff876503ba91cad2533060] <==
	* [INFO] 10.244.0.10:56436 - 55134 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060718s
	[INFO] 10.244.0.10:57912 - 56730 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003870434s
	[INFO] 10.244.0.10:57912 - 63901 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005013972s
	[INFO] 10.244.0.10:33431 - 40115 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005555908s
	[INFO] 10.244.0.10:33431 - 36022 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007348376s
	[INFO] 10.244.0.10:53568 - 55135 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003727393s
	[INFO] 10.244.0.10:53568 - 33362 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006365233s
	[INFO] 10.244.0.10:45949 - 28566 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061605s
	[INFO] 10.244.0.10:45949 - 29587 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124965s
	[INFO] 10.244.0.20:44603 - 16220 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151903s
	[INFO] 10.244.0.20:39184 - 23548 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207324s
	[INFO] 10.244.0.20:57548 - 56809 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116463s
	[INFO] 10.244.0.20:45647 - 25628 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180314s
	[INFO] 10.244.0.20:33214 - 47482 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008297s
	[INFO] 10.244.0.20:45668 - 28773 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121389s
	[INFO] 10.244.0.20:32789 - 41279 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004616765s
	[INFO] 10.244.0.20:46201 - 31006 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006725551s
	[INFO] 10.244.0.20:49993 - 49805 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006223193s
	[INFO] 10.244.0.20:47658 - 35516 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006279737s
	[INFO] 10.244.0.20:38759 - 31081 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006203527s
	[INFO] 10.244.0.20:56248 - 31689 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832427s
	[INFO] 10.244.0.20:55484 - 14845 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000634302s
	[INFO] 10.244.0.20:40729 - 29019 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000779895s
	[INFO] 10.244.0.25:59269 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00022415s
	[INFO] 10.244.0.25:56475 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123932s
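Note: the NXDOMAIN/NOERROR pairs above are the normal in-cluster search-path walk. Pod resolv.conf is typically generated with ndots:5 plus the cluster and GCE search domains seen in the log, so a name like registry.kube-system.svc.cluster.local (only four dots) is first tried with each search suffix appended, producing the NXDOMAIN lines, before the absolute name answers NOERROR. A tiny sketch, assuming it runs inside a pod on this cluster:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Fewer dots than ndots:5, so the resolver appends each search domain
        // (cluster.local, ...c.k8s-minikube.internal, google.internal) first,
        // exactly the NXDOMAIN sequence CoreDNS logged above.
        addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
        fmt.Println(addrs, err)
    }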
	
	* 
	* ==> describe nodes <==
	* Name:               addons-818905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-818905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=addons-818905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_03_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-818905
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-818905
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:07:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:06:05 +0000   Tue, 12 Dec 2023 22:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:06:05 +0000   Tue, 12 Dec 2023 22:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:06:05 +0000   Tue, 12 Dec 2023 22:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:06:05 +0000   Tue, 12 Dec 2023 22:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-818905
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec60f86b068a4c23b171320d45849218
	  System UUID:                e08a4ec2-a8bc-4866-875b-4a1a708b9c93
	  Boot ID:                    e32ab69d-45ad-4e0a-b786-ce498c8395cb
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-c4cfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-bs24v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-5dd5756b68-h4rvx                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m59s
	  kube-system                 etcd-addons-818905                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m13s
	  kube-system                 kindnet-m2vln                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m
	  kube-system                 kube-apiserver-addons-818905             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-addons-818905    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-bl7tf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-scheduler-addons-818905             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m55s  kube-proxy       
	  Normal  Starting                 4m13s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m13s  kubelet          Node addons-818905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s  kubelet          Node addons-818905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s  kubelet          Node addons-818905 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m1s   node-controller  Node addons-818905 event: Registered Node addons-818905 in Controller
	  Normal  NodeReady                3m26s  kubelet          Node addons-818905 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007605] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003007] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000639] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000623] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000623] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000674] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000684] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 8
	[ +10.028692] kauditd_printk_skb: 36 callbacks suppressed
	[Dec12 22:05] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	[  +1.012169] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	[  +2.015794] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	[  +4.191562] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	[  +8.191227] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	[ +16.126267] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	[Dec12 22:06] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 e1 67 61 24 6f 86 f3 94 73 fa 39 08 00
	
	* 
	* ==> etcd [1e20b73cc258aff4698a8b9e41ceb38f17ab5d0081fe7bf33bc7e4ccfaae1d58] <==
	* {"level":"info","ts":"2023-12-12T22:03:47.7333Z","caller":"traceutil/trace.go:171","msg":"trace[1974505543] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"296.7175ms","start":"2023-12-12T22:03:47.436575Z","end":"2023-12-12T22:03:47.733293Z","steps":["trace[1974505543] 'process raft request'  (duration: 296.025783ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:47.733368Z","caller":"traceutil/trace.go:171","msg":"trace[1907912190] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"213.916451ms","start":"2023-12-12T22:03:47.519444Z","end":"2023-12-12T22:03:47.733361Z","steps":["trace[1907912190] 'process raft request'  (duration: 213.21302ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:47.733489Z","caller":"traceutil/trace.go:171","msg":"trace[164526594] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"211.683311ms","start":"2023-12-12T22:03:47.521799Z","end":"2023-12-12T22:03:47.733483Z","steps":["trace[164526594] 'process raft request'  (duration: 210.886347ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:03:47.734259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.294136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:03:47.735049Z","caller":"traceutil/trace.go:171","msg":"trace[175603115] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:380; }","duration":"114.082128ms","start":"2023-12-12T22:03:47.620958Z","end":"2023-12-12T22:03:47.73504Z","steps":["trace[175603115] 'agreement among raft nodes before linearized reading'  (duration: 113.280574ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:03:47.734341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.187669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:03:47.736382Z","caller":"traceutil/trace.go:171","msg":"trace[1780352886] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:380; }","duration":"204.222619ms","start":"2023-12-12T22:03:47.532148Z","end":"2023-12-12T22:03:47.736371Z","steps":["trace[1780352886] 'agreement among raft nodes before linearized reading'  (duration: 202.177765ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:48.529606Z","caller":"traceutil/trace.go:171","msg":"trace[1062574331] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"104.410935ms","start":"2023-12-12T22:03:48.425178Z","end":"2023-12-12T22:03:48.529589Z","steps":["trace[1062574331] 'process raft request'  (duration: 104.295112ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:28.214482Z","caller":"traceutil/trace.go:171","msg":"trace[901079705] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:961; }","duration":"133.32295ms","start":"2023-12-12T22:04:28.081138Z","end":"2023-12-12T22:04:28.214461Z","steps":["trace[901079705] 'read index received'  (duration: 69.579218ms)","trace[901079705] 'applied index is now lower than readState.Index'  (duration: 63.743099ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:04:28.214571Z","caller":"traceutil/trace.go:171","msg":"trace[849702681] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"142.913176ms","start":"2023-12-12T22:04:28.071632Z","end":"2023-12-12T22:04:28.214546Z","steps":["trace[849702681] 'process raft request'  (duration: 79.128034ms)","trace[849702681] 'compare'  (duration: 63.60257ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T22:04:28.214639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.497458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5736"}
	{"level":"info","ts":"2023-12-12T22:04:28.214667Z","caller":"traceutil/trace.go:171","msg":"trace[2136373322] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:937; }","duration":"133.542877ms","start":"2023-12-12T22:04:28.081116Z","end":"2023-12-12T22:04:28.214658Z","steps":["trace[2136373322] 'agreement among raft nodes before linearized reading'  (duration: 133.465607ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.466303Z","caller":"traceutil/trace.go:171","msg":"trace[1713887354] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"120.793689ms","start":"2023-12-12T22:04:59.345494Z","end":"2023-12-12T22:04:59.466288Z","steps":["trace[1713887354] 'process raft request'  (duration: 29.33325ms)","trace[1713887354] 'compare'  (duration: 91.054842ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:04:59.466484Z","caller":"traceutil/trace.go:171","msg":"trace[1918326014] transaction","detail":"{read_only:false; response_revision:1114; number_of_response:1; }","duration":"120.047992ms","start":"2023-12-12T22:04:59.346427Z","end":"2023-12-12T22:04:59.466475Z","steps":["trace[1918326014] 'process raft request'  (duration: 119.544939ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.466551Z","caller":"traceutil/trace.go:171","msg":"trace[1204744217] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"119.798932ms","start":"2023-12-12T22:04:59.346747Z","end":"2023-12-12T22:04:59.466546Z","steps":["trace[1204744217] 'process raft request'  (duration: 119.259161ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.466605Z","caller":"traceutil/trace.go:171","msg":"trace[483315355] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"119.685654ms","start":"2023-12-12T22:04:59.346914Z","end":"2023-12-12T22:04:59.4666Z","steps":["trace[483315355] 'process raft request'  (duration: 119.112089ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.677923Z","caller":"traceutil/trace.go:171","msg":"trace[173774252] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"144.00529ms","start":"2023-12-12T22:04:59.533894Z","end":"2023-12-12T22:04:59.677899Z","steps":["trace[173774252] 'process raft request'  (duration: 58.190446ms)","trace[173774252] 'compare'  (duration: 85.597841ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:05:16.840896Z","caller":"traceutil/trace.go:171","msg":"trace[110425836] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"120.48092ms","start":"2023-12-12T22:05:16.720394Z","end":"2023-12-12T22:05:16.840875Z","steps":["trace[110425836] 'process raft request'  (duration: 120.311733ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:05:28.36783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.648145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/local-path-storage/\" range_end:\"/registry/replicasets/local-path-storage0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:05:28.367894Z","caller":"traceutil/trace.go:171","msg":"trace[2118634835] range","detail":"{range_begin:/registry/replicasets/local-path-storage/; range_end:/registry/replicasets/local-path-storage0; response_count:0; response_revision:1408; }","duration":"123.731226ms","start":"2023-12-12T22:05:28.244146Z","end":"2023-12-12T22:05:28.367877Z","steps":["trace[2118634835] 'range keys from in-memory index tree'  (duration: 123.604585ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:05:28.533437Z","caller":"traceutil/trace.go:171","msg":"trace[1986037333] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1459; }","duration":"126.448642ms","start":"2023-12-12T22:05:28.406969Z","end":"2023-12-12T22:05:28.533418Z","steps":["trace[1986037333] 'read index received'  (duration: 126.268662ms)","trace[1986037333] 'applied index is now lower than readState.Index'  (duration: 179.577µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:05:28.533482Z","caller":"traceutil/trace.go:171","msg":"trace[1718408694] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"142.72043ms","start":"2023-12-12T22:05:28.390744Z","end":"2023-12-12T22:05:28.533464Z","steps":["trace[1718408694] 'process raft request'  (duration: 142.562124ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:05:28.533542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.573334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:05:28.53359Z","caller":"traceutil/trace.go:171","msg":"trace[101537703] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"126.631925ms","start":"2023-12-12T22:05:28.406947Z","end":"2023-12-12T22:05:28.533578Z","steps":["trace[101537703] 'agreement among raft nodes before linearized reading'  (duration: 126.541642ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:05:28.650841Z","caller":"traceutil/trace.go:171","msg":"trace[1709282490] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"111.521264ms","start":"2023-12-12T22:05:28.539301Z","end":"2023-12-12T22:05:28.650822Z","steps":["trace[1709282490] 'process raft request'  (duration: 111.423637ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [b1ee5925a9708fdabe6245a489897046f22f720a6e22d75ae27de76f71464a64] <==
	* 2023/12/12 22:04:53 GCP Auth Webhook started!
	2023/12/12 22:05:05 Ready to marshal response ...
	2023/12/12 22:05:05 Ready to write response ...
	2023/12/12 22:05:05 Ready to marshal response ...
	2023/12/12 22:05:05 Ready to write response ...
	2023/12/12 22:05:09 Ready to marshal response ...
	2023/12/12 22:05:09 Ready to write response ...
	2023/12/12 22:05:10 Ready to marshal response ...
	2023/12/12 22:05:10 Ready to write response ...
	2023/12/12 22:05:14 Ready to marshal response ...
	2023/12/12 22:05:14 Ready to write response ...
	2023/12/12 22:05:17 Ready to marshal response ...
	2023/12/12 22:05:17 Ready to write response ...
	2023/12/12 22:05:35 Ready to marshal response ...
	2023/12/12 22:05:35 Ready to write response ...
	2023/12/12 22:05:55 Ready to marshal response ...
	2023/12/12 22:05:55 Ready to write response ...
	2023/12/12 22:07:34 Ready to marshal response ...
	2023/12/12 22:07:34 Ready to write response ...
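Note: the start log earlier stated that the gcp-auth webhook mounts GCP credentials into every new pod unless the pod carries a label with the `gcp-auth-skip-secret` key. A hedged sketch of opting a pod out via client-go follows; the label key and the image are taken from the log, while the label value "true", the pod name, and everything else are illustrative assumptions.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds", // hypothetical name
                // Label key is quoted in the start log; the value is an assumption.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "app",
                    Image: "gcr.io/google-samples/hello-app:1.0", // image already present in this cluster
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }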
	
	* 
	* ==> kernel <==
	*  22:07:44 up 50 min,  0 users,  load average: 0.48, 0.95, 0.52
	Linux addons-818905 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2ef41bd4e945194bda0c923e875f5cc6e9ad3681034340b43547a78a7b19508b] <==
	* I1212 22:05:38.418436       1 main.go:227] handling current node
	I1212 22:05:48.426404       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:05:48.426434       1 main.go:227] handling current node
	I1212 22:05:58.429552       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:05:58.429575       1 main.go:227] handling current node
	I1212 22:06:08.439816       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:06:08.439836       1 main.go:227] handling current node
	I1212 22:06:18.443387       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:06:18.443410       1 main.go:227] handling current node
	I1212 22:06:28.446494       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:06:28.446518       1 main.go:227] handling current node
	I1212 22:06:38.451362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:06:38.451388       1 main.go:227] handling current node
	I1212 22:06:48.463608       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:06:48.463631       1 main.go:227] handling current node
	I1212 22:06:58.466666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:06:58.466689       1 main.go:227] handling current node
	I1212 22:07:08.471572       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:07:08.471602       1 main.go:227] handling current node
	I1212 22:07:18.475799       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:07:18.475823       1 main.go:227] handling current node
	I1212 22:07:28.487939       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:07:28.487961       1 main.go:227] handling current node
	I1212 22:07:38.491995       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:07:38.492015       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [dda8cb4e3c38a9ebece1cf15ee0faa7a8c2c02673b2f6118723d39e15e6a8327] <==
	* I1212 22:05:25.053869       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1212 22:05:26.064434       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1212 22:05:32.112841       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1212 22:05:33.383387       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1212 22:05:47.160475       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 22:06:11.043332       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.043376       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.049462       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.049518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.058051       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.058193       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.064254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.064459       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.128746       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.128995       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.131766       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.131814       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.138126       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.138172       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 22:06:11.144901       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 22:06:11.144942       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 22:06:12.065304       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 22:06:12.145367       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1212 22:06:12.148126       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1212 22:07:34.760873       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.172.64"}
	
	* 
	* ==> kube-controller-manager [1f13b146792deabef40ac99f7f50af053a8c4c7cecbac97273b118a0abd78e17] <==
	* W1212 22:06:47.536208       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:06:47.536236       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:06:47.688159       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:06:47.688191       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:06:48.543969       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:06:48.543996       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:07:23.665010       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:23.665039       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:07:25.091943       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:25.091970       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1212 22:07:33.020942       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:33.020972       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1212 22:07:34.599080       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1212 22:07:34.613818       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-c4cfn"
	I1212 22:07:34.619688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.789656ms"
	I1212 22:07:34.623899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.101774ms"
	I1212 22:07:34.624022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="86.077µs"
	I1212 22:07:34.629899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.457µs"
	I1212 22:07:36.517530       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1212 22:07:36.519054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="9.449µs"
	I1212 22:07:36.521557       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 22:07:36.711765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.518277ms"
	I1212 22:07:36.711850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.916µs"
	W1212 22:07:36.897070       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1212 22:07:36.897097       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [81c775386ee4e49e512b6a390226f967bd48c00a46b35d0a4bfd4550f6ed4ce5] <==
	* I1212 22:03:48.422048       1 server_others.go:69] "Using iptables proxy"
	I1212 22:03:48.625429       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 22:03:49.024672       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 22:03:49.038681       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:03:49.038786       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 22:03:49.038821       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 22:03:49.038882       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:03:49.039140       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:03:49.039368       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:03:49.117871       1 config.go:188] "Starting service config controller"
	I1212 22:03:49.118905       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:03:49.118314       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:03:49.118996       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:03:49.118674       1 config.go:315] "Starting node config controller"
	I1212 22:03:49.119026       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:03:49.220409       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:03:49.220569       1 shared_informer.go:318] Caches are synced for node config
	I1212 22:03:49.220618       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0b87a62984772ddfcc3c284095b2e409611917f578db34c8a1500faf022b12f2] <==
	* E1212 22:03:28.923322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:03:28.923328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:28.923458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:03:28.923484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:03:28.923490       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:03:28.923501       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:03:28.923509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:03:28.923516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:03:28.923460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:03:28.923565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:28.923631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:03:28.923700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:03:28.923638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:03:28.927275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:03:29.829725       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:03:29.829756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:03:29.854028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:03:29.854068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:03:29.879308       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:03:29.879340       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 22:03:29.879452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:03:29.879475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:03:29.886724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:03:29.886749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1212 22:03:32.020013       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 22:07:34 addons-818905 kubelet[1550]: I1212 22:07:34.620934    1550 memory_manager.go:346] "RemoveStaleState removing state" podUID="2d937fcd-bb06-4668-9bfe-27c070954c6a" containerName="liveness-probe"
	Dec 12 22:07:34 addons-818905 kubelet[1550]: I1212 22:07:34.730406    1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qh6\" (UniqueName: \"kubernetes.io/projected/65140394-ed72-483b-8c64-c7e779f85be1-kube-api-access-z8qh6\") pod \"hello-world-app-5d77478584-c4cfn\" (UID: \"65140394-ed72-483b-8c64-c7e779f85be1\") " pod="default/hello-world-app-5d77478584-c4cfn"
	Dec 12 22:07:34 addons-818905 kubelet[1550]: I1212 22:07:34.730470    1550 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/65140394-ed72-483b-8c64-c7e779f85be1-gcp-creds\") pod \"hello-world-app-5d77478584-c4cfn\" (UID: \"65140394-ed72-483b-8c64-c7e779f85be1\") " pod="default/hello-world-app-5d77478584-c4cfn"
	Dec 12 22:07:35 addons-818905 kubelet[1550]: I1212 22:07:35.693719    1550 scope.go:117] "RemoveContainer" containerID="ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97"
	Dec 12 22:07:35 addons-818905 kubelet[1550]: I1212 22:07:35.707569    1550 scope.go:117] "RemoveContainer" containerID="ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97"
	Dec 12 22:07:35 addons-818905 kubelet[1550]: E1212 22:07:35.707928    1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97\": container with ID starting with ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97 not found: ID does not exist" containerID="ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97"
	Dec 12 22:07:35 addons-818905 kubelet[1550]: I1212 22:07:35.707965    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97"} err="failed to get container status \"ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97\": rpc error: code = NotFound desc = could not find container \"ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97\": container with ID starting with ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97 not found: ID does not exist"
	Dec 12 22:07:35 addons-818905 kubelet[1550]: I1212 22:07:35.835297    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhr2l\" (UniqueName: \"kubernetes.io/projected/bbda5a3b-e96f-420f-9b8f-95922e769a8d-kube-api-access-nhr2l\") pod \"bbda5a3b-e96f-420f-9b8f-95922e769a8d\" (UID: \"bbda5a3b-e96f-420f-9b8f-95922e769a8d\") "
	Dec 12 22:07:35 addons-818905 kubelet[1550]: I1212 22:07:35.837140    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbda5a3b-e96f-420f-9b8f-95922e769a8d-kube-api-access-nhr2l" (OuterVolumeSpecName: "kube-api-access-nhr2l") pod "bbda5a3b-e96f-420f-9b8f-95922e769a8d" (UID: "bbda5a3b-e96f-420f-9b8f-95922e769a8d"). InnerVolumeSpecName "kube-api-access-nhr2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:07:35 addons-818905 kubelet[1550]: I1212 22:07:35.935756    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nhr2l\" (UniqueName: \"kubernetes.io/projected/bbda5a3b-e96f-420f-9b8f-95922e769a8d-kube-api-access-nhr2l\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:07:36 addons-818905 kubelet[1550]: I1212 22:07:36.705150    1550 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-c4cfn" podStartSLOduration=2.081780668 podCreationTimestamp="2023-12-12 22:07:34 +0000 UTC" firstStartedPulling="2023-12-12 22:07:35.017816446 +0000 UTC m=+243.431371947" lastFinishedPulling="2023-12-12 22:07:35.641139121 +0000 UTC m=+244.054694613" observedRunningTime="2023-12-12 22:07:36.704975648 +0000 UTC m=+245.118531157" watchObservedRunningTime="2023-12-12 22:07:36.705103334 +0000 UTC m=+245.118658844"
	Dec 12 22:07:37 addons-818905 kubelet[1550]: I1212 22:07:37.726098    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3c0ae378-787e-40dc-86bd-32e7581d0b84" path="/var/lib/kubelet/pods/3c0ae378-787e-40dc-86bd-32e7581d0b84/volumes"
	Dec 12 22:07:37 addons-818905 kubelet[1550]: I1212 22:07:37.726416    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8479c2ac-6f79-49d0-ba82-254a23373277" path="/var/lib/kubelet/pods/8479c2ac-6f79-49d0-ba82-254a23373277/volumes"
	Dec 12 22:07:37 addons-818905 kubelet[1550]: I1212 22:07:37.726687    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bbda5a3b-e96f-420f-9b8f-95922e769a8d" path="/var/lib/kubelet/pods/bbda5a3b-e96f-420f-9b8f-95922e769a8d/volumes"
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.703886    1550 scope.go:117] "RemoveContainer" containerID="a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080"
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.717360    1550 scope.go:117] "RemoveContainer" containerID="a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080"
	Dec 12 22:07:39 addons-818905 kubelet[1550]: E1212 22:07:39.717621    1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080\": container with ID starting with a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080 not found: ID does not exist" containerID="a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080"
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.717661    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080"} err="failed to get container status \"a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080\": rpc error: code = NotFound desc = could not find container \"a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080\": container with ID starting with a80511f2cc3ed6fc6ee3fc3e55579be7101a8d15c36d49ca5b75d9eaca6c0080 not found: ID does not exist"
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.856697    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9kxf\" (UniqueName: \"kubernetes.io/projected/31a710c2-33fa-4729-a129-efb759861c19-kube-api-access-l9kxf\") pod \"31a710c2-33fa-4729-a129-efb759861c19\" (UID: \"31a710c2-33fa-4729-a129-efb759861c19\") "
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.856738    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31a710c2-33fa-4729-a129-efb759861c19-webhook-cert\") pod \"31a710c2-33fa-4729-a129-efb759861c19\" (UID: \"31a710c2-33fa-4729-a129-efb759861c19\") "
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.858494    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a710c2-33fa-4729-a129-efb759861c19-kube-api-access-l9kxf" (OuterVolumeSpecName: "kube-api-access-l9kxf") pod "31a710c2-33fa-4729-a129-efb759861c19" (UID: "31a710c2-33fa-4729-a129-efb759861c19"). InnerVolumeSpecName "kube-api-access-l9kxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.858551    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31a710c2-33fa-4729-a129-efb759861c19-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "31a710c2-33fa-4729-a129-efb759861c19" (UID: "31a710c2-33fa-4729-a129-efb759861c19"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.957762    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l9kxf\" (UniqueName: \"kubernetes.io/projected/31a710c2-33fa-4729-a129-efb759861c19-kube-api-access-l9kxf\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:07:39 addons-818905 kubelet[1550]: I1212 22:07:39.957794    1550 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/31a710c2-33fa-4729-a129-efb759861c19-webhook-cert\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:07:41 addons-818905 kubelet[1550]: I1212 22:07:41.725555    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="31a710c2-33fa-4729-a129-efb759861c19" path="/var/lib/kubelet/pods/31a710c2-33fa-4729-a129-efb759861c19/volumes"
	
	* 
	* ==> storage-provisioner [52293c0b2f4394891addc8c5d0199b176124d008bb75df9c85893953bd9907ca] <==
	* I1212 22:04:19.272211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:04:19.279205       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:04:19.279248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:04:19.284887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:04:19.284999       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-818905_e6d50b75-d051-4c5f-a2a2-1faa5aa8f0aa!
	I1212 22:04:19.285029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6ac0969-5fb7-4455-9f60-f7ef31f5ec2e", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-818905_e6d50b75-d051-4c5f-a2a2-1faa5aa8f0aa became leader
	I1212 22:04:19.385812       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-818905_e6d50b75-d051-4c5f-a2a2-1faa5aa8f0aa!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-818905 -n addons-818905
helpers_test.go:261: (dbg) Run:  kubectl --context addons-818905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.37s)
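The failing step in this test is the in-node probe `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`, which exited with status 28 — curl's operation-timeout code — meaning the ingress controller never answered on port 80 with the test's Host header. Below is a minimal Go sketch of the same probe; it is illustrative only (not the harness's own code), assumes it runs somewhere the controller's port 80 is reachable as 127.0.0.1, and uses only the URL and Host header visible in the log above:

// probe_ingress.go — a hedged sketch of the check addons_test.go performs
// with curl: GET the node's port 80 while overriding the Host header so
// nginx routes the request to the test's backing service.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header, like curl -H 'Host: ...'.
	// The Ingress rule in testdata/nginx-ingress-v1.yaml matches this host.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		// The CI run hit the equivalent of this path: no response before timeout.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}

An error or non-200 status from this probe corresponds to the timeout the test hit before it fell through to the post-mortem collection above.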

TestAddons/parallel/Headlamp (2.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-818905 --alsologtostderr -v=1
addons_test.go:823: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-818905 --alsologtostderr -v=1: exit status 11 (279.419863ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 22:05:20.207667   27432 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:05:20.207833   27432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:05:20.207841   27432 out.go:309] Setting ErrFile to fd 2...
	I1212 22:05:20.207846   27432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:05:20.208031   27432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:05:20.208321   27432 mustload.go:65] Loading cluster: addons-818905
	I1212 22:05:20.208666   27432 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:05:20.208688   27432 addons.go:594] checking whether the cluster is paused
	I1212 22:05:20.208768   27432 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:05:20.208778   27432 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:05:20.209130   27432 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:05:20.227870   27432 ssh_runner.go:195] Run: systemctl --version
	I1212 22:05:20.227945   27432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:05:20.246458   27432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:05:20.331518   27432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:05:20.331614   27432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:05:20.370642   27432 cri.go:89] found id: "102ebd2482dc38741daff8ec8d1cfa5cdeb4ea4ba19d943426916372dd15bdfb"
	I1212 22:05:20.370672   27432 cri.go:89] found id: "44306b9ab3c7f71ab07b2a1c64cd73a641c32a0bac26766be26bb22b95833f74"
	I1212 22:05:20.370676   27432 cri.go:89] found id: "8616631f0021f055c944f21c2f6e6341a175a2cd9e0f631f9b7f57c265b4879e"
	I1212 22:05:20.370680   27432 cri.go:89] found id: "630e6f156278bbb912e0e45e4930f535f300245e699d7a45c6b8e98239f26ca9"
	I1212 22:05:20.370683   27432 cri.go:89] found id: "31a7de257dd78a2c6c1bcd50ccdcc30b015e13355adcb30f970309897c0ece4f"
	I1212 22:05:20.370690   27432 cri.go:89] found id: "22d4b7d1a687e5e8a4fa1ff9e168df5368131cff7851238e4ca99cb53e9e0e70"
	I1212 22:05:20.370696   27432 cri.go:89] found id: "1f8c65b8e3be5a730ab9621ed5ece4273bc9399c1e05e695fe4ae93458222d5a"
	I1212 22:05:20.370701   27432 cri.go:89] found id: "2ade154fdf4618df87b5414f58eba3ab0c77f1c6b0682d53bbb4cfcde7c9decc"
	I1212 22:05:20.370707   27432 cri.go:89] found id: "ed8787083a833fb5f169746fd502ab7e9389bce83b293a3970aa3fc958ecec97"
	I1212 22:05:20.370719   27432 cri.go:89] found id: "4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51"
	I1212 22:05:20.370724   27432 cri.go:89] found id: "fe69091ade7e3fb7bff0d5726649ae993b1061b261932e14f7c61a9e4e0de8ed"
	I1212 22:05:20.370730   27432 cri.go:89] found id: "f383a794043d88f97b76b71eadf3bebb52e32f54e4e2a5fc334bd6db4d3b3431"
	I1212 22:05:20.370736   27432 cri.go:89] found id: "4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891"
	I1212 22:05:20.370744   27432 cri.go:89] found id: "52293c0b2f4394891addc8c5d0199b176124d008bb75df9c85893953bd9907ca"
	I1212 22:05:20.370751   27432 cri.go:89] found id: "ed348533ef6724d77f5ca2aa99bab9b642233855afff876503ba91cad2533060"
	I1212 22:05:20.370757   27432 cri.go:89] found id: "2ef41bd4e945194bda0c923e875f5cc6e9ad3681034340b43547a78a7b19508b"
	I1212 22:05:20.370764   27432 cri.go:89] found id: "81c775386ee4e49e512b6a390226f967bd48c00a46b35d0a4bfd4550f6ed4ce5"
	I1212 22:05:20.370770   27432 cri.go:89] found id: "0b87a62984772ddfcc3c284095b2e409611917f578db34c8a1500faf022b12f2"
	I1212 22:05:20.370776   27432 cri.go:89] found id: "1f13b146792deabef40ac99f7f50af053a8c4c7cecbac97273b118a0abd78e17"
	I1212 22:05:20.370779   27432 cri.go:89] found id: "dda8cb4e3c38a9ebece1cf15ee0faa7a8c2c02673b2f6118723d39e15e6a8327"
	I1212 22:05:20.370784   27432 cri.go:89] found id: "1e20b73cc258aff4698a8b9e41ceb38f17ab5d0081fe7bf33bc7e4ccfaae1d58"
	I1212 22:05:20.370787   27432 cri.go:89] found id: ""
	I1212 22:05:20.370836   27432 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 22:05:20.421375   27432 out.go:177] 
	W1212 22:05:20.424436   27432 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-12-12T22:05:20Z" level=error msg="stat /run/runc/4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891: no such file or directory"
	
	W1212 22:05:20.424462   27432 out.go:239] * 
	W1212 22:05:20.426339   27432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 22:05:20.428032   27432 out.go:177] 

** /stderr **
addons_test.go:825: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-818905 --alsologtostderr -v=1": exit status 11
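The exit status 11 here traces back to minikube's pre-flight check of whether the cluster is paused: as the stderr log shows, it lists kube-system container IDs with crictl and then queries runc, and `sudo runc list -f json` failed on a stale container ID (4309fad8…) whose state directory was already gone from /run/runc, which minikube surfaced as MK_ADDON_ENABLE_PAUSED. The following is a rough Go sketch of the shape of that check, reconstructed only from the two commands visible in the log above; it would need to run inside the minikube node (e.g. over SSH, as minikube does) and is not minikube's actual implementation:

// paused_check.go — a hedged sketch of the paused check's shape, using only
// the two commands that appear verbatim in the stderr log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List every kube-system container known to the CRI runtime.
	ids, err := exec.Command("sudo", "-s", "eval",
		`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`).Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Ask runc for the state of all containers. In the failing run this step
	// exited with status 1 because one listed container had already been
	// removed from /run/runc, and the addon enable aborted as a result.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Println("runc list succeeded; cluster does not appear paused")
}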
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-818905
helpers_test.go:235: (dbg) docker inspect addons-818905:

-- stdout --
	[
	    {
	        "Id": "c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a",
	        "Created": "2023-12-12T22:03:15.59381978Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:03:15.900885909Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/hosts",
	        "LogPath": "/var/lib/docker/containers/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a/c3e9db620dfd965234cb9d6d53a2295f5a810df3b4cf9dd4906fed068e8c409a-json.log",
	        "Name": "/addons-818905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-818905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-818905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e-init/diff:/var/lib/docker/overlay2/315943c5fbce6bf5205163f366377908e1fa1e507321eff7fb62256fbf325087/diff",
	                "MergedDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/780ffa35a7ffd1f49706c9a3b3a2c13050e0a9255115435e6b45ee5d09d4c87e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-818905",
	                "Source": "/var/lib/docker/volumes/addons-818905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-818905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-818905",
	                "name.minikube.sigs.k8s.io": "addons-818905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44f8d23852dbe637e6958c0038ab91ba8bc5974c5bb86adccab2e9f517474532",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44f8d23852db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-818905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c3e9db620dfd",
	                        "addons-818905"
	                    ],
	                    "NetworkID": "62cff3dd1908cdc8b0cac8152bda41ab72e4c63f6f1286b803d21d1d3261d680",
	                    "EndpointID": "9c73d266e6ed0c9b8ae901967fd9ba75f632cc202bd6b09770a62eb75973d61a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-818905 -n addons-818905
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-818905 logs -n 25: (1.168729344s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | -p download-only-479271                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | -p download-only-479271                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | -p download-only-479271                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| delete  | -p download-only-479271                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| delete  | -p download-only-479271                                                                     | download-only-479271   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| start   | --download-only -p                                                                          | download-docker-042208 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | download-docker-042208                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-042208                                                                   | download-docker-042208 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-328563   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | binary-mirror-328563                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36103                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-328563                                                                     | binary-mirror-328563   | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:02 UTC |
	| addons  | disable dashboard -p                                                                        | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | addons-818905                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |                     |
	|         | addons-818905                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-818905 --wait=true                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC | 12 Dec 23 22:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | -p addons-818905                                                                            |                        |         |         |                     |                     |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-818905 ssh cat                                                                       | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | /opt/local-path-provisioner/pvc-63257cc8-df89-4d4d-9324-970810f80368_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-818905 addons                                                                        | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-818905 ip                                                                            | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	| addons  | addons-818905 addons disable                                                                | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC | 12 Dec 23 22:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-818905          | jenkins | v1.32.0 | 12 Dec 23 22:05 UTC |                     |
	|         | -p addons-818905                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:54.281932   17479 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:54.282094   17479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:54.282104   17479 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:54.282108   17479 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:54.282329   17479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:02:54.282956   17479 out.go:303] Setting JSON to false
	I1212 22:02:54.283781   17479 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2726,"bootTime":1702415848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:54.283839   17479 start.go:138] virtualization: kvm guest
	I1212 22:02:54.286219   17479 out.go:177] * [addons-818905] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:54.287719   17479 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:02:54.289128   17479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:54.287718   17479 notify.go:220] Checking for updates...
	I1212 22:02:54.292015   17479 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:02:54.293538   17479 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:02:54.295033   17479 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:02:54.296426   17479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:02:54.297885   17479 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:02:54.317317   17479 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:02:54.317424   17479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:54.365132   17479 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-12 22:02:54.356923234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:54.365243   17479 docker.go:295] overlay module found
	I1212 22:02:54.368022   17479 out.go:177] * Using the docker driver based on user configuration
	I1212 22:02:54.369482   17479 start.go:298] selected driver: docker
	I1212 22:02:54.369500   17479 start.go:902] validating driver "docker" against <nil>
	I1212 22:02:54.369513   17479 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:02:54.370756   17479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:54.421619   17479 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-12 22:02:54.413375363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:54.421756   17479 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:02:54.421967   17479 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:02:54.423913   17479 out.go:177] * Using Docker driver with root privileges
	I1212 22:02:54.425274   17479 cni.go:84] Creating CNI manager for ""
	I1212 22:02:54.425289   17479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:02:54.425297   17479 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 22:02:54.425306   17479 start_flags.go:323] config:
	{Name:addons-818905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:54.426900   17479 out.go:177] * Starting control plane node addons-818905 in cluster addons-818905
	I1212 22:02:54.428221   17479 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:02:54.429551   17479 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:02:54.430776   17479 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:54.430804   17479 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:54.430810   17479 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:54.430861   17479 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:02:54.430892   17479 preload.go:174] Found /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:02:54.430903   17479 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:02:54.431308   17479 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/config.json ...
	I1212 22:02:54.431333   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/config.json: {Name:mkdb6ed98b8cb72753cbb97152ca6748f00feee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:02:54.445142   17479 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 22:02:54.445260   17479 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 22:02:54.445278   17479 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory, skipping pull
	I1212 22:02:54.445282   17479 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in cache, skipping pull
	I1212 22:02:54.445290   17479 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 as a tarball
	I1212 22:02:54.445297   17479 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 from local cache
	I1212 22:03:07.208869   17479 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 from cached tarball
	I1212 22:03:07.208919   17479 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:03:07.208957   17479 start.go:365] acquiring machines lock for addons-818905: {Name:mk5192f60678fae1daf6eb7075e7a56b8fe6e5da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:03:07.209057   17479 start.go:369] acquired machines lock for "addons-818905" in 81.749µs
	I1212 22:03:07.209082   17479 start.go:93] Provisioning new machine with config: &{Name:addons-818905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:03:07.209151   17479 start.go:125] createHost starting for "" (driver="docker")
	I1212 22:03:07.290277   17479 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1212 22:03:07.290555   17479 start.go:159] libmachine.API.Create for "addons-818905" (driver="docker")
	I1212 22:03:07.290592   17479 client.go:168] LocalClient.Create starting
	I1212 22:03:07.290748   17479 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem
	I1212 22:03:07.398915   17479 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem
	I1212 22:03:07.659756   17479 cli_runner.go:164] Run: docker network inspect addons-818905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 22:03:07.675004   17479 cli_runner.go:211] docker network inspect addons-818905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 22:03:07.675082   17479 network_create.go:281] running [docker network inspect addons-818905] to gather additional debugging logs...
	I1212 22:03:07.675109   17479 cli_runner.go:164] Run: docker network inspect addons-818905
	W1212 22:03:07.691431   17479 cli_runner.go:211] docker network inspect addons-818905 returned with exit code 1
	I1212 22:03:07.691466   17479 network_create.go:284] error running [docker network inspect addons-818905]: docker network inspect addons-818905: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-818905 not found
	I1212 22:03:07.691481   17479 network_create.go:286] output of [docker network inspect addons-818905]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-818905 not found
	
	** /stderr **
	I1212 22:03:07.691620   17479 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:03:07.706942   17479 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002866a80}
	I1212 22:03:07.706986   17479 network_create.go:124] attempt to create docker network addons-818905 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 22:03:07.707034   17479 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-818905 addons-818905
	I1212 22:03:07.981158   17479 network_create.go:108] docker network addons-818905 192.168.49.0/24 created
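	The create command above passes the subnet, gateway, and MTU explicitly so the node container can be given the predictable address 192.168.49.2. A minimal sketch for verifying the result (network name taken from this log):
	
	# Print the subnet and gateway of the bridge network minikube just created.
	docker network inspect addons-818905 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'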
	I1212 22:03:07.981190   17479 kic.go:121] calculated static IP "192.168.49.2" for the "addons-818905" container
	I1212 22:03:07.981247   17479 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 22:03:07.995904   17479 cli_runner.go:164] Run: docker volume create addons-818905 --label name.minikube.sigs.k8s.io=addons-818905 --label created_by.minikube.sigs.k8s.io=true
	I1212 22:03:08.041288   17479 oci.go:103] Successfully created a docker volume addons-818905
	I1212 22:03:08.041380   17479 cli_runner.go:164] Run: docker run --rm --name addons-818905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-818905 --entrypoint /usr/bin/test -v addons-818905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 22:03:10.435995   17479 cli_runner.go:217] Completed: docker run --rm --name addons-818905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-818905 --entrypoint /usr/bin/test -v addons-818905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib: (2.394563419s)
	I1212 22:03:10.436030   17479 oci.go:107] Successfully prepared a docker volume addons-818905
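	The "preload sidecar" above relies on Docker's volume pre-population: when an empty named volume is mounted at a path that exists in the image, Docker copies the image's files into the volume on first use, and `/usr/bin/test -d /var/lib` is just a cheap entrypoint that exits immediately afterwards. A minimal sketch of the same trick, assuming any image that ships a populated /var (volume name and image here are hypothetical):
	
	# Pre-populate a fresh volume from an image's /var, then look inside it.
	docker volume create demo-var
	docker run --rm --entrypoint /usr/bin/test -v demo-var:/var ubuntu:22.04 -d /var/lib
	docker run --rm -v demo-var:/var ubuntu:22.04 ls /var/lib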
	I1212 22:03:10.436055   17479 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:03:10.436073   17479 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 22:03:10.436128   17479 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-818905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 22:03:15.525392   17479 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-818905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir: (5.089205861s)
	I1212 22:03:15.525424   17479 kic.go:203] duration metric: took 5.089347 seconds to extract preloaded images to volume
	W1212 22:03:15.525560   17479 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 22:03:15.525672   17479 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 22:03:15.579967   17479 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-818905 --name addons-818905 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-818905 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-818905 --network addons-818905 --ip 192.168.49.2 --volume addons-818905:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
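	Each `--publish=127.0.0.1::<port>` in the run command above binds a container port to a random loopback port on the host, which is why SSH shows up later in this log on 127.0.0.1:32772. The mapping can be recovered with the same inspect template the log itself uses:
	
	# Resolve the host port Docker assigned to the node's SSH port 22.
	docker container inspect addons-818905 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'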
	I1212 22:03:15.908593   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Running}}
	I1212 22:03:15.925239   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:15.941593   17479 cli_runner.go:164] Run: docker exec addons-818905 stat /var/lib/dpkg/alternatives/iptables
	I1212 22:03:15.992659   17479 oci.go:144] the created container "addons-818905" has a running status.
	I1212 22:03:15.992687   17479 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa...
	I1212 22:03:16.078873   17479 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 22:03:16.097774   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:16.113584   17479 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 22:03:16.113601   17479 kic_runner.go:114] Args: [docker exec --privileged addons-818905 chown docker:docker /home/docker/.ssh/authorized_keys]
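	With the public key installed and chowned to the `docker` user, the node is reachable over plain SSH. A sketch using the key path and user recorded in this log (the port, 32772, is the one resolved a few entries below):
	
	# Open a shell on the node the same way the provisioner does.
	ssh -i /home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa \
	  -p 32772 docker@127.0.0.1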
	I1212 22:03:16.173009   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:16.190198   17479 machine.go:88] provisioning docker machine ...
	I1212 22:03:16.190229   17479 ubuntu.go:169] provisioning hostname "addons-818905"
	I1212 22:03:16.190276   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:16.208215   17479 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:16.208554   17479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1212 22:03:16.208570   17479 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-818905 && echo "addons-818905" | sudo tee /etc/hostname
	I1212 22:03:16.210225   17479 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55468->127.0.0.1:32772: read: connection reset by peer
	I1212 22:03:19.341346   17479 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-818905
	
	I1212 22:03:19.341422   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.357565   17479 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:19.357909   17479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1212 22:03:19.357926   17479 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-818905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-818905/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-818905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:03:19.475035   17479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:03:19.475064   17479 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:03:19.475105   17479 ubuntu.go:177] setting up certificates
	I1212 22:03:19.475116   17479 provision.go:83] configureAuth start
	I1212 22:03:19.475165   17479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-818905
	I1212 22:03:19.491346   17479 provision.go:138] copyHostCerts
	I1212 22:03:19.491418   17479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:03:19.491571   17479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:03:19.491774   17479 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:03:19.491895   17479 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.addons-818905 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-818905]
	I1212 22:03:19.556879   17479 provision.go:172] copyRemoteCerts
	I1212 22:03:19.556942   17479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:03:19.556972   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.573323   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:19.663758   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 22:03:19.684477   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:03:19.704958   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:03:19.725486   17479 provision.go:86] duration metric: configureAuth took 250.355277ms
	I1212 22:03:19.725516   17479 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:03:19.725691   17479 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:19.725804   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.741380   17479 main.go:141] libmachine: Using SSH client type: native
	I1212 22:03:19.741744   17479 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1212 22:03:19.741766   17479 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:03:19.944446   17479 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
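	The `%!s(MISSING)` in the provisioning command above is a Go fmt artifact of the logger re-formatting an already-built string, not part of what was executed; the command the node actually ran was, reconstructed:
	
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio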
	I1212 22:03:19.944471   17479 machine.go:91] provisioned docker machine in 3.754252641s
	I1212 22:03:19.944482   17479 client.go:171] LocalClient.Create took 12.653879994s
	I1212 22:03:19.944501   17479 start.go:167] duration metric: libmachine.API.Create for "addons-818905" took 12.653946454s
	I1212 22:03:19.944514   17479 start.go:300] post-start starting for "addons-818905" (driver="docker")
	I1212 22:03:19.944532   17479 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:03:19.944590   17479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:03:19.944635   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:19.962470   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.051667   17479 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:03:20.054478   17479 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:03:20.054505   17479 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:03:20.054513   17479 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:03:20.054520   17479 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 22:03:20.054528   17479 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:03:20.054577   17479 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:03:20.054598   17479 start.go:303] post-start completed in 110.077707ms
	I1212 22:03:20.054830   17479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-818905
	I1212 22:03:20.071099   17479 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/config.json ...
	I1212 22:03:20.071315   17479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:03:20.071376   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:20.085574   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.171751   17479 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:03:20.175625   17479 start.go:128] duration metric: createHost completed in 12.966458256s
	I1212 22:03:20.175650   17479 start.go:83] releasing machines lock for "addons-818905", held for 12.966580453s
	I1212 22:03:20.175717   17479 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-818905
	I1212 22:03:20.191539   17479 ssh_runner.go:195] Run: cat /version.json
	I1212 22:03:20.191616   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:20.191665   17479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:03:20.191730   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:20.208989   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.209596   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:20.382225   17479 ssh_runner.go:195] Run: systemctl --version
	I1212 22:03:20.385937   17479 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:03:20.519348   17479 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:03:20.523296   17479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:03:20.540115   17479 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:03:20.540188   17479 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:03:20.565506   17479 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
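	Renaming configs to `*.mk_disabled` takes them out of CRI-O's CNI search path without deleting them, so the change is reversible. A sketch of the undo, assuming the same /etc/cni/net.d layout:
	
	# Re-enable any CNI config that minikube parked as .mk_disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;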
	I1212 22:03:20.565541   17479 start.go:475] detecting cgroup driver to use...
	I1212 22:03:20.565572   17479 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:03:20.565608   17479 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:03:20.578209   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:03:20.587312   17479 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:03:20.587384   17479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:03:20.598406   17479 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:03:20.610319   17479 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:03:20.691564   17479 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:03:20.768294   17479 docker.go:219] disabling docker service ...
	I1212 22:03:20.768376   17479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:03:20.784354   17479 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:03:20.793850   17479 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:03:20.857767   17479 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:03:20.932360   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:03:20.942532   17479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:03:20.955958   17479 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:03:20.956009   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.964362   17479 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:03:20.964412   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.972229   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:03:20.980349   17479 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
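	The sed edits above pin the pause image, force the cgroupfs cgroup manager, and re-add `conmon_cgroup = "pod"` (the value CRI-O requires when cgroup_manager is cgroupfs). The resulting drop-in can be checked with:
	
	# Expected values, per this run of minikube:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf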
	I1212 22:03:20.988305   17479 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:03:20.995944   17479 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:03:21.002558   17479 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:03:21.009458   17479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:03:21.082866   17479 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:03:21.185364   17479 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:03:21.185438   17479 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:03:21.188798   17479 start.go:543] Will wait 60s for crictl version
	I1212 22:03:21.188843   17479 ssh_runner.go:195] Run: which crictl
	I1212 22:03:21.191869   17479 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:03:21.223224   17479 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 22:03:21.223376   17479 ssh_runner.go:195] Run: crio --version
	I1212 22:03:21.256094   17479 ssh_runner.go:195] Run: crio --version
	I1212 22:03:21.288418   17479 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 22:03:21.289937   17479 cli_runner.go:164] Run: docker network inspect addons-818905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:03:21.307137   17479 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 22:03:21.310543   17479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
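	The one-liner above is an idiom for editing /etc/hosts without root-owned redirection: filter out any stale entry, append the fresh one into a temp file, then `sudo cp` it back (a bare `sudo cmd > /etc/hosts` would open the target as the unprivileged shell user). Schematically:
	
	# Replace-or-append a hosts entry; $$ makes the temp file name unique.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.49.1	host.minikube.internal"
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts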
	I1212 22:03:21.320390   17479 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:03:21.320436   17479 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:03:21.371859   17479 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:03:21.371882   17479 crio.go:415] Images already preloaded, skipping extraction
	I1212 22:03:21.371931   17479 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:03:21.401469   17479 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:03:21.401493   17479 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:03:21.401557   17479 ssh_runner.go:195] Run: crio config
	I1212 22:03:21.440150   17479 cni.go:84] Creating CNI manager for ""
	I1212 22:03:21.440170   17479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:03:21.440184   17479 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:03:21.440200   17479 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-818905 NodeName:addons-818905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:03:21.440333   17479 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-818905"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:03:21.440395   17479 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-818905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
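	The empty `ExecStart=` line in the kubelet drop-in above is the standard systemd override idiom: it first clears the ExecStart inherited from the base kubelet.service, then the next line supplies the full replacement command. The merged result can be inspected with:
	
	# Show the base unit plus every drop-in, in the order systemd applies them.
	systemctl cat kubelet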
	I1212 22:03:21.440440   17479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:03:21.448194   17479 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:03:21.448251   17479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:03:21.455949   17479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1212 22:03:21.470707   17479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:03:21.485502   17479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
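	The three YAML documents rendered earlier land on the node as /var/tmp/minikube/kubeadm.yaml.new (2094 bytes, per the scp above). A hedged way to sanity-check such a config before it is used, assuming kubeadm's standard flags:
	
	# Parse and exercise the generated config without changing the node.
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run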
	I1212 22:03:21.499965   17479 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 22:03:21.502696   17479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:03:21.512025   17479 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905 for IP: 192.168.49.2
	I1212 22:03:21.512053   17479 certs.go:190] acquiring lock for shared ca certs: {Name:mkef1e7b14f91e4f04d1e9cbbafdc8c42ba43b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.512169   17479 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key
	I1212 22:03:21.710544   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt ...
	I1212 22:03:21.710574   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt: {Name:mk0e0fd4d038396d1fc2bf31caea05ecaf29aaee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.710732   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key ...
	I1212 22:03:21.710742   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key: {Name:mk674a523a317792bfd36cd4ddb9c74a608f21e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.710808   17479 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key
	I1212 22:03:21.951207   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt ...
	I1212 22:03:21.951239   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt: {Name:mkd352921ab3d8753775b65aa4c4e555b772bbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.951401   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key ...
	I1212 22:03:21.951411   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key: {Name:mka56f39e927e7d629de44160a204845d1ed44d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:21.951511   17479 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.key
	I1212 22:03:21.951530   17479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt with IP's: []
	I1212 22:03:22.127869   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt ...
	I1212 22:03:22.127897   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: {Name:mk42354b39cf88ed47b666838f3906fc0059e663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.128042   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.key ...
	I1212 22:03:22.128052   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.key: {Name:mk4c69e12ba3cdd1bb99f1f7b7bcec542ea8ed07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.128126   17479 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2
	I1212 22:03:22.128142   17479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:03:22.354306   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2 ...
	I1212 22:03:22.354336   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2: {Name:mkf0a5ffe152afd7dc2ae2432b33e4908560bb02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.354484   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2 ...
	I1212 22:03:22.354498   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2: {Name:mke7664a0e83520068579d72203d39d736831243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.354560   17479 certs.go:337] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt
	I1212 22:03:22.354620   17479 certs.go:341] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key
	I1212 22:03:22.354661   17479 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key
	I1212 22:03:22.354676   17479 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt with IP's: []
	I1212 22:03:22.595702   17479 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt ...
	I1212 22:03:22.595729   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt: {Name:mk7296c907a900eb3a472d81afb4b249f7081624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.595877   17479 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key ...
	I1212 22:03:22.595887   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key: {Name:mkcf27d57e94511b1f7c217dfcc80b2ae395803b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:22.596045   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 22:03:22.596079   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:03:22.596098   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:03:22.596129   17479 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem (1675 bytes)
	I1212 22:03:22.596695   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:03:22.617968   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 22:03:22.638254   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:03:22.658123   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:03:22.678442   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:03:22.698139   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:03:22.717686   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:03:22.737245   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:03:22.757440   17479 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:03:22.776860   17479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:03:22.791189   17479 ssh_runner.go:195] Run: openssl version
	I1212 22:03:22.796246   17479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:03:22.803700   17479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:22.806582   17479 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:22.806640   17479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:03:22.812476   17479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:03:22.819797   17479 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:03:22.822408   17479 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
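The probe above is how minikube decides this is a first start: it runs `ls` on the etcd certs directory and treats exit status 2 ("No such file or directory") as "fresh cluster" rather than as an error. A minimal Go sketch of that pattern follows; the helper name is hypothetical and this is not minikube's actual certs.go code.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // certsDirExists is a hypothetical helper: run `ls` on the directory and
    // treat a non-zero exit from ls as "directory missing" (likely first
    // start), while still surfacing failures to launch ls at all.
    func certsDirExists(path string) (bool, error) {
    	err := exec.Command("ls", path).Run()
    	if err == nil {
    		return true, nil
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// ls exited non-zero (status 2 above): path cannot be accessed.
    		return false, nil
    	}
    	return false, err
    }

    func main() {
    	exists, err := certsDirExists("/var/lib/minikube/certs/etcd")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("certs directory exists:", exists)
    }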
	I1212 22:03:22.822445   17479 kubeadm.go:404] StartCluster: {Name:addons-818905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-818905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:03:22.822536   17479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:03:22.822583   17479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:03:22.853248   17479 cri.go:89] found id: ""
	I1212 22:03:22.853301   17479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:03:22.862317   17479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:03:22.869395   17479 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 22:03:22.869443   17479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:03:22.876490   17479 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:03:22.876527   17479 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 22:03:22.948632   17479 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1212 22:03:23.005795   17479 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:03:31.823842   17479 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:03:31.823936   17479 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:03:31.824081   17479 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 22:03:31.824175   17479 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1212 22:03:31.824230   17479 kubeadm.go:322] OS: Linux
	I1212 22:03:31.824284   17479 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 22:03:31.824344   17479 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 22:03:31.824408   17479 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 22:03:31.824468   17479 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 22:03:31.824544   17479 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 22:03:31.824619   17479 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 22:03:31.824681   17479 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 22:03:31.824748   17479 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 22:03:31.824820   17479 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 22:03:31.824915   17479 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:03:31.825048   17479 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:03:31.825170   17479 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:03:31.825247   17479 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:03:31.827090   17479 out.go:204]   - Generating certificates and keys ...
	I1212 22:03:31.827199   17479 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:03:31.827302   17479 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:03:31.827400   17479 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:03:31.827501   17479 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:03:31.827611   17479 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:03:31.827711   17479 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:03:31.827795   17479 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:03:31.827951   17479 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-818905 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 22:03:31.828047   17479 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:03:31.828224   17479 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-818905 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 22:03:31.828316   17479 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:03:31.828410   17479 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:03:31.828472   17479 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:03:31.828541   17479 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:03:31.828606   17479 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:03:31.828678   17479 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:03:31.828768   17479 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:03:31.828844   17479 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:03:31.828955   17479 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:03:31.829037   17479 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:03:31.830885   17479 out.go:204]   - Booting up control plane ...
	I1212 22:03:31.830991   17479 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:03:31.831087   17479 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:03:31.831174   17479 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:03:31.831346   17479 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:03:31.831464   17479 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:03:31.831517   17479 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:03:31.831772   17479 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:03:31.831871   17479 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.502555 seconds
	I1212 22:03:31.832033   17479 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:03:31.832211   17479 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:03:31.832299   17479 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:03:31.832547   17479 kubeadm.go:322] [mark-control-plane] Marking the node addons-818905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:03:31.832628   17479 kubeadm.go:322] [bootstrap-token] Using token: weiybl.9w3njaxhbwpige62
	I1212 22:03:31.834067   17479 out.go:204]   - Configuring RBAC rules ...
	I1212 22:03:31.834195   17479 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:03:31.834295   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:03:31.834480   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:03:31.834622   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:03:31.834777   17479 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:03:31.834882   17479 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:03:31.834971   17479 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:03:31.835006   17479 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:03:31.835043   17479 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:03:31.835048   17479 kubeadm.go:322] 
	I1212 22:03:31.835091   17479 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:03:31.835097   17479 kubeadm.go:322] 
	I1212 22:03:31.835153   17479 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:03:31.835159   17479 kubeadm.go:322] 
	I1212 22:03:31.835217   17479 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:03:31.835300   17479 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:03:31.835363   17479 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:03:31.835373   17479 kubeadm.go:322] 
	I1212 22:03:31.835433   17479 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:03:31.835443   17479 kubeadm.go:322] 
	I1212 22:03:31.835514   17479 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:03:31.835527   17479 kubeadm.go:322] 
	I1212 22:03:31.835626   17479 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:03:31.835727   17479 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:03:31.835822   17479 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:03:31.835833   17479 kubeadm.go:322] 
	I1212 22:03:31.835943   17479 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:03:31.836078   17479 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:03:31.836093   17479 kubeadm.go:322] 
	I1212 22:03:31.836177   17479 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token weiybl.9w3njaxhbwpige62 \
	I1212 22:03:31.836265   17479 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f \
	I1212 22:03:31.836287   17479 kubeadm.go:322] 	--control-plane 
	I1212 22:03:31.836293   17479 kubeadm.go:322] 
	I1212 22:03:31.836360   17479 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:03:31.836367   17479 kubeadm.go:322] 
	I1212 22:03:31.836431   17479 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token weiybl.9w3njaxhbwpige62 \
	I1212 22:03:31.836599   17479 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f 
	I1212 22:03:31.836620   17479 cni.go:84] Creating CNI manager for ""
	I1212 22:03:31.836626   17479 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:03:31.838215   17479 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 22:03:31.839497   17479 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:03:31.842711   17479 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:03:31.842725   17479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:03:31.857812   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:03:32.483281   17479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:03:32.483323   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:32.483340   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=addons-818905 minikube.k8s.io/updated_at=2023_12_12T22_03_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:32.556750   17479 ops.go:34] apiserver oom_adj: -16
	I1212 22:03:32.556945   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:32.617952   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:33.178601   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:33.678196   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:34.178830   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:34.678852   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:35.178960   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:35.678248   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:36.178160   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:36.678163   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:37.178029   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:37.678298   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:38.178947   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:38.678830   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:39.178362   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:39.678189   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:40.178774   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:40.678496   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:41.178704   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:41.679048   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:42.178716   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:42.678572   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:43.178545   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:43.678950   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.178249   17479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:03:44.241022   17479 kubeadm.go:1088] duration metric: took 11.75774238s to wait for elevateKubeSystemPrivileges.
	I1212 22:03:44.241061   17479 kubeadm.go:406] StartCluster complete in 21.41861949s
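The burst of identical `kubectl get sa default` runs above (22:03:32 through 22:03:44) is a readiness poll: minikube re-checks roughly every 500ms until the default service account exists, since the `create clusterrolebinding minikube-rbac ... --serviceaccount=kube-system:default` step cannot bind a service account that the controller manager has not created yet. A hedged sketch of that loop, with an illustrative helper name and timeout:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or
    // the deadline passes; the command exits non-zero while the service
    // account is still being created.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
    			"get", "sa", "default")
    		if cmd.Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("default service account is ready")
    }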
	I1212 22:03:44.241083   17479 settings.go:142] acquiring lock: {Name:mk857225ea2f0544984670c00dbb01f431ce59c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:44.241195   17479 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:03:44.241542   17479 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/kubeconfig: {Name:mkd3e8de36f0003ff040c445ac6e47a46685daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:03:44.241704   17479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:03:44.241777   17479 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1212 22:03:44.241863   17479 addons.go:69] Setting volumesnapshots=true in profile "addons-818905"
	I1212 22:03:44.241874   17479 addons.go:69] Setting ingress-dns=true in profile "addons-818905"
	I1212 22:03:44.241887   17479 addons.go:231] Setting addon volumesnapshots=true in "addons-818905"
	I1212 22:03:44.241889   17479 addons.go:69] Setting default-storageclass=true in profile "addons-818905"
	I1212 22:03:44.241904   17479 addons.go:231] Setting addon ingress-dns=true in "addons-818905"
	I1212 22:03:44.241912   17479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-818905"
	I1212 22:03:44.241937   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.241912   17479 addons.go:69] Setting helm-tiller=true in profile "addons-818905"
	I1212 22:03:44.241947   17479 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-818905"
	I1212 22:03:44.241966   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.241976   17479 addons.go:69] Setting gcp-auth=true in profile "addons-818905"
	I1212 22:03:44.241997   17479 mustload.go:65] Loading cluster: addons-818905
	I1212 22:03:44.242023   17479 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-818905"
	I1212 22:03:44.242074   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.242229   17479 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:44.242280   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242456   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242465   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242486   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242515   17479 addons.go:69] Setting metrics-server=true in profile "addons-818905"
	I1212 22:03:44.242530   17479 addons.go:231] Setting addon metrics-server=true in "addons-818905"
	I1212 22:03:44.242600   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.242866   17479 addons.go:69] Setting storage-provisioner=true in profile "addons-818905"
	I1212 22:03:44.242883   17479 addons.go:231] Setting addon storage-provisioner=true in "addons-818905"
	I1212 22:03:44.242931   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.243031   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.243174   17479 addons.go:69] Setting registry=true in profile "addons-818905"
	I1212 22:03:44.243203   17479 addons.go:231] Setting addon registry=true in "addons-818905"
	I1212 22:03:44.243244   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.243478   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.243745   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.243856   17479 addons.go:69] Setting ingress=true in profile "addons-818905"
	I1212 22:03:44.243875   17479 addons.go:231] Setting addon ingress=true in "addons-818905"
	I1212 22:03:44.243909   17479 addons.go:69] Setting cloud-spanner=true in profile "addons-818905"
	I1212 22:03:44.243942   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.243949   17479 addons.go:231] Setting addon cloud-spanner=true in "addons-818905"
	I1212 22:03:44.244028   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.244400   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.244474   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.241970   17479 addons.go:231] Setting addon helm-tiller=true in "addons-818905"
	I1212 22:03:44.245339   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.245798   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.241936   17479 config.go:182] Loaded profile config "addons-818905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:03:44.242510   17479 addons.go:69] Setting inspektor-gadget=true in profile "addons-818905"
	I1212 22:03:44.246084   17479 addons.go:231] Setting addon inspektor-gadget=true in "addons-818905"
	I1212 22:03:44.246145   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.246554   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.247399   17479 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-818905"
	I1212 22:03:44.242489   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.242503   17479 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-818905"
	I1212 22:03:44.253738   17479 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-818905"
	I1212 22:03:44.253819   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.254340   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.255718   17479 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-818905"
	I1212 22:03:44.256097   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.277171   17479 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1212 22:03:44.279397   17479 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:03:44.279419   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1212 22:03:44.279475   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.282332   17479 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:03:44.283890   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 22:03:44.283733   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 22:03:44.283742   17479 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1212 22:03:44.283857   17479 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:03:44.285773   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 22:03:44.285890   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:03:44.286601   17479 addons.go:231] Setting addon default-storageclass=true in "addons-818905"
	I1212 22:03:44.287595   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.287774   17479 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1212 22:03:44.287792   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1212 22:03:44.287846   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.288011   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 22:03:44.288019   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 22:03:44.288052   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.288124   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.288185   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 22:03:44.288274   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.292686   17479 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1212 22:03:44.294908   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 22:03:44.294929   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 22:03:44.294996   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.291164   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 22:03:44.296566   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 22:03:44.298040   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 22:03:44.299674   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:44.299644   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 22:03:44.301358   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1212 22:03:44.304008   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:44.302682   17479 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 22:03:44.306677   17479 out.go:177]   - Using image docker.io/registry:2.8.3
	I1212 22:03:44.305810   17479 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:03:44.307903   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1212 22:03:44.309160   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.309230   17479 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-818905" context rescaled to 1 replicas
	I1212 22:03:44.309264   17479 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:03:44.310468   17479 out.go:177] * Verifying Kubernetes components...
	I1212 22:03:44.309348   17479 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1212 22:03:44.309406   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 22:03:44.309414   17479 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1212 22:03:44.312748   17479 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1212 22:03:44.311685   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:03:44.311697   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 22:03:44.314254   17479 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:03:44.314831   17479 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1212 22:03:44.314869   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.316141   17479 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 22:03:44.316154   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1212 22:03:44.316271   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 22:03:44.316306   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 22:03:44.316344   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.316526   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.316672   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.321615   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.335671   17479 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-818905"
	I1212 22:03:44.335718   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.336680   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:44.343629   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.345634   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.348517   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:44.350122   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.369305   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.382468   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.384304   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.390569   17479 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1212 22:03:44.389613   17479 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:03:44.390603   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:03:44.390658   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.392113   17479 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1212 22:03:44.392133   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1212 22:03:44.392179   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.391058   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.394101   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.396466   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.399512   17479 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 22:03:44.400771   17479 out.go:177]   - Using image docker.io/busybox:stable
	I1212 22:03:44.402165   17479 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:03:44.402186   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 22:03:44.402242   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:44.406883   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.409375   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	W1212 22:03:44.420535   17479 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 22:03:44.420565   17479 retry.go:31] will retry after 223.935691ms: ssh: handshake failed: EOF
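The SSH handshake failure above is tolerated rather than fatal: retry.go schedules another attempt after a short randomized delay. A sketch of that retry-with-jitter pattern is below; the constants and dial target are illustrative and do not reproduce minikube's actual backoff parameters.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"net"
    	"time"
    )

    // dialWithRetry retries a TCP dial a few times, sleeping a growing,
    // jittered delay between attempts, mirroring the "will retry after
    // 223.935691ms" behaviour in the log.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		base := 200 * time.Millisecond
    		jitter := time.Duration(rand.Int63n(int64(base)))
    		delay := time.Duration(i+1)*base + jitter
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return nil, fmt.Errorf("all %d dial attempts failed: %w", attempts, lastErr)
    }

    func main() {
    	conn, err := dialWithRetry("127.0.0.1:32772", 3)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	conn.Close()
    	fmt.Println("connected")
    }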
	I1212 22:03:44.420698   17479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:03:44.421091   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:44.421748   17479 node_ready.go:35] waiting up to 6m0s for node "addons-818905" to be "Ready" ...
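node_ready.go then polls the node's Ready condition for up to six minutes; the later `"Ready":"False"` entries are iterations of that loop. A minimal sketch of the same check, assuming kubectl is on PATH, where the jsonpath filter extracts only the Ready condition's status:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // nodeReady asks the API server for the node's Ready condition and
    // reports whether its status is "True".
    func nodeReady(name string) bool {
    	out, err := exec.Command("kubectl", "get", "node", name, "-o",
    		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		if nodeReady("addons-818905") {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }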
	I1212 22:03:44.620512   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 22:03:44.725093   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:03:44.726772   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 22:03:44.726800   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 22:03:44.729801   17479 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 22:03:44.729829   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 22:03:44.737621   17479 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 22:03:44.737657   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 22:03:44.819039   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 22:03:44.824135   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 22:03:44.829488   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 22:03:44.839323   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 22:03:44.839350   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 22:03:44.930219   17479 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:03:44.930246   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 22:03:44.931426   17479 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1212 22:03:44.931446   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1212 22:03:45.019108   17479 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 22:03:45.019140   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 22:03:45.028038   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:03:45.036984   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 22:03:45.037017   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 22:03:45.128421   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 22:03:45.134182   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 22:03:45.134207   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 22:03:45.135538   17479 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 22:03:45.135567   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 22:03:45.323593   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 22:03:45.420844   17479 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:03:45.420882   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 22:03:45.427733   17479 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:03:45.427811   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1212 22:03:45.428060   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 22:03:45.428097   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 22:03:45.617325   17479 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1212 22:03:45.617355   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1212 22:03:45.635521   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 22:03:45.635599   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 22:03:45.636295   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 22:03:45.831671   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 22:03:45.831767   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 22:03:46.018648   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1212 22:03:46.024735   17479 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:46.024822   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 22:03:46.218654   17479 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1212 22:03:46.218684   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1212 22:03:46.240281   17479 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 22:03:46.240323   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 22:03:46.518377   17479 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1212 22:03:46.518462   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1212 22:03:46.530104   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:46.533961   17479 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.113230679s)
	I1212 22:03:46.534005   17479 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 22:03:46.633471   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:46.728484   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 22:03:46.728565   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 22:03:46.935340   17479 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1212 22:03:46.935420   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1212 22:03:47.317019   17479 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1212 22:03:47.317117   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1212 22:03:47.424861   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 22:03:47.424900   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 22:03:47.624578   17479 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1212 22:03:47.624611   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1212 22:03:47.720500   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 22:03:47.720530   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 22:03:47.929777   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 22:03:47.929805   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 22:03:48.117257   17479 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:03:48.117286   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1212 22:03:48.138266   17479 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:03:48.138292   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 22:03:48.438589   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 22:03:48.531973   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1212 22:03:48.819986   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.199419648s)
	I1212 22:03:49.031210   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:49.521074   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.795934485s)
	I1212 22:03:49.521157   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.702079186s)
	I1212 22:03:49.521209   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.6970475s)
	I1212 22:03:50.616928   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.787401721s)
	I1212 22:03:50.616973   17479 addons.go:467] Verifying addon ingress=true in "addons-818905"
	I1212 22:03:50.616995   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.588921069s)
	I1212 22:03:50.618525   17479 out.go:177] * Verifying ingress addon...
	I1212 22:03:50.617072   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.488617244s)
	I1212 22:03:50.617125   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.29349408s)
	I1212 22:03:50.617201   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.980802411s)
	I1212 22:03:50.617264   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.598519196s)
	I1212 22:03:50.617447   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.087304143s)
	W1212 22:03:50.620378   17479 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 22:03:50.620391   17479 addons.go:467] Verifying addon registry=true in "addons-818905"
	I1212 22:03:50.620405   17479 retry.go:31] will retry after 179.516209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
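This failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml rides in the same batch as the CRD that defines it, and the apiserver rejects the custom resource before the freshly created CRD is established in discovery. minikube's answer is the timed retry above; an alternative sketch that waits for the CRD explicitly, with names taken from the stdout above:

	# Create the CRD first, block until the apiserver marks it
	# Established, then apply the custom resource that needs it:
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml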
	I1212 22:03:50.620406   17479 addons.go:467] Verifying addon metrics-server=true in "addons-818905"
	I1212 22:03:50.622389   17479 out.go:177] * Verifying registry addon...
	I1212 22:03:50.621287   17479 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 22:03:50.624602   17479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 22:03:50.628779   17479 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 22:03:50.628807   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 22:03:50.629725   17479 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
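The 'default-storageclass' warning is the apiserver's optimistic-concurrency check firing: something else updated the local-path StorageClass between minikube's read and its write, so the stale write is rejected and must be retried against the latest version. Because a strategic-merge patch carries no resourceVersion, re-issuing the standard default-class annotation sidesteps the conflict; a minimal sketch:

	# A patch does not submit a stale resourceVersion, so it cannot hit
	# this "object has been modified" rejection:
	kubectl patch storageclass local-path -p \
	  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'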
	I1212 22:03:50.630305   17479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:03:50.630323   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:50.632720   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:50.634397   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:50.801071   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 22:03:51.136478   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:51.138048   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:51.154694   17479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 22:03:51.154781   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:51.174436   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
	I1212 22:03:51.333345   17479 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 22:03:51.351068   17479 addons.go:231] Setting addon gcp-auth=true in "addons-818905"
	I1212 22:03:51.351126   17479 host.go:66] Checking if "addons-818905" exists ...
	I1212 22:03:51.351638   17479 cli_runner.go:164] Run: docker container inspect addons-818905 --format={{.State.Status}}
	I1212 22:03:51.372837   17479 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 22:03:51.372879   17479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-818905
	I1212 22:03:51.388380   17479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa Username:docker}
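The ssh client lines show how the Docker driver reaches the node: container port 22 is published on a host port (32772 here) and the per-machine key authenticates as the docker user. The same tunnel can be driven by hand; a sketch reusing the key path and port from this log:

	# Confirm the published port, then run a command inside the node:
	docker port addons-818905 22
	ssh -i /home/jenkins/minikube-integration/17761-9643/.minikube/machines/addons-818905/id_rsa \
	  -p 32772 docker@127.0.0.1 "sudo cat /var/lib/minikube/google_cloud_project"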
	I1212 22:03:51.518577   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:51.621192   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.182530675s)
	I1212 22:03:51.621238   17479 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-818905"
	I1212 22:03:51.621268   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.089229502s)
	I1212 22:03:51.623274   17479 out.go:177] * Verifying csi-hostpath-driver addon...
	I1212 22:03:51.626268   17479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 22:03:51.631660   17479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:03:51.631724   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:51.635113   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:51.636420   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:51.637584   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:51.934732   17479 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.133612159s)
	I1212 22:03:51.937403   17479 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1212 22:03:51.938869   17479 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1212 22:03:51.940281   17479 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 22:03:51.940302   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 22:03:51.956441   17479 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 22:03:51.956467   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 22:03:51.971241   17479 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 22:03:51.971263   17479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1212 22:03:51.985967   17479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
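gcp-auth comes in two halves: the credential files copied under /var/lib/minikube above, and the webhook deployment applied here, which injects those credentials into newly created pods. Its rollout can be checked with the same label this log polls below; a minimal sketch:

	kubectl --namespace gcp-auth wait --for=condition=ready pod \
	  --selector=kubernetes.io/minikube-addons=gcp-auth --timeout=90s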
	I1212 22:03:52.136500   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:52.138486   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:52.138614   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:52.361517   17479 addons.go:467] Verifying addon gcp-auth=true in "addons-818905"
	I1212 22:03:52.363268   17479 out.go:177] * Verifying gcp-auth addon...
	I1212 22:03:52.365445   17479 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 22:03:52.367995   17479 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 22:03:52.368012   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:52.372084   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:52.637145   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:52.638214   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:52.639002   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:52.876195   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:53.138146   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:53.138655   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:53.140090   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:53.376424   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:53.639521   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:53.640681   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:53.642443   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:53.918666   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:54.016702   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:54.137716   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:54.139941   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:54.141315   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:54.417408   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:54.637691   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:54.639080   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:54.639494   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:54.875474   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:55.136970   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:55.139619   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:55.139791   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:55.376112   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:55.637398   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:55.638391   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:55.639327   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:55.875503   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:56.136311   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:56.138959   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:56.139147   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:56.376156   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:56.451567   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:56.637246   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:56.639479   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:56.639599   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:56.875355   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:57.136238   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:57.138195   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:57.138663   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:57.375483   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:57.637380   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:57.638172   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:57.639258   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:57.874916   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:58.137371   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:58.138153   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:58.138920   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:58.375688   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:58.636369   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:58.640881   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:58.640969   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:58.875795   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:58.951377   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:03:59.137310   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:59.138814   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:59.138927   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:59.375975   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:03:59.636694   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:03:59.638787   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:03:59.639015   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:03:59.875475   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:00.136849   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:00.139542   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:00.139740   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:00.375557   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:00.636299   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:00.638565   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:00.638693   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:00.875894   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:01.136696   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:01.138657   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:01.139192   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:01.375107   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:01.451490   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:01.637472   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:01.641281   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:01.642375   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:01.875383   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:02.137287   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:02.138118   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:02.138912   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:02.375443   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:02.637450   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:02.638043   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:02.639175   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:02.876060   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:03.136709   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:03.138460   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:03.139008   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:03.375434   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:03.637308   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:03.638396   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:03.639142   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:03.875818   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:03.951202   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:04.137090   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:04.139374   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:04.139635   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:04.375876   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:04.636626   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:04.638974   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:04.639061   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:04.875788   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:05.136905   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:05.138482   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:05.138615   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:05.375542   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:05.637275   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:05.638719   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:05.641002   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:05.875709   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:06.136438   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:06.138873   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:06.138963   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:06.375677   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:06.451098   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:06.636570   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:06.638738   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:06.638937   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:06.875893   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:07.136661   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:07.138569   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:07.138666   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:07.375626   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:07.636244   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:07.638416   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:07.638596   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:07.875531   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:08.137220   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:08.138059   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:08.139195   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:08.374954   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:08.451265   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:08.636932   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:08.638924   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:08.639056   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:08.875369   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:09.137703   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:09.138167   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:09.139119   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:09.375128   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:09.637035   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:09.637981   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:09.638765   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:09.875672   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:10.136314   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:10.138560   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:10.138642   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:10.375226   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:10.451510   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:10.637163   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:10.637871   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:10.638935   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:10.875534   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:11.137129   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:11.138013   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:11.138998   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:11.375767   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:11.637018   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:11.637846   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:11.638595   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:11.875596   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:12.136491   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:12.138466   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:12.138485   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:12.375024   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:12.637107   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:12.638523   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:12.640401   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:12.875119   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:12.951518   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:13.137052   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:13.137891   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:13.138803   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:13.375428   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:13.637178   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:13.637825   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:13.638957   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:13.875645   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:14.136281   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:14.138385   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:14.138675   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:14.375694   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:14.636550   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:14.638693   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:14.638800   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:14.875759   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:15.136305   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:15.138362   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:15.138449   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:15.375231   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:15.451587   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:15.637404   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:15.637753   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:15.638866   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:15.875280   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:16.137372   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:16.138246   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:16.139171   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:16.375066   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:16.637554   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:16.637839   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:16.639034   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:16.875648   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:17.136463   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:17.138182   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:17.138456   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:17.374924   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:17.636872   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:17.638819   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:17.639247   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:17.874961   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:17.951163   17479 node_ready.go:58] node "addons-818905" has status "Ready":"False"
	I1212 22:04:18.136227   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:18.138524   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:18.138678   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:18.420951   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:18.450906   17479 node_ready.go:49] node "addons-818905" has status "Ready":"True"
	I1212 22:04:18.450932   17479 node_ready.go:38] duration metric: took 34.029159082s waiting for node "addons-818905" to be "Ready" ...
	I1212 22:04:18.450943   17479 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:04:18.457994   17479 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4rvx" in "kube-system" namespace to be "Ready" ...
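The node goes Ready at 22:04:18, 34s into the wait, and the loop moves on to the system-critical pods listed above. Stock kubectl can express the same two checks, using the node name and one of the listed labels:

	kubectl wait --for=condition=Ready node/addons-818905 --timeout=6m
	kubectl --namespace kube-system wait --for=condition=ready pod \
	  --selector=k8s-app=kube-dns --timeout=6m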
	I1212 22:04:18.640662   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:18.641704   17479 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 22:04:18.641722   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:18.642089   17479 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 22:04:18.642103   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:18.919324   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:19.137217   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:19.142669   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:19.143761   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:19.375025   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:19.636541   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:19.639089   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:19.639545   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:19.875608   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:20.136554   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:20.139240   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:20.139428   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:20.375924   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:20.520166   17479 pod_ready.go:92] pod "coredns-5dd5756b68-h4rvx" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.520188   17479 pod_ready.go:81] duration metric: took 2.062171329s waiting for pod "coredns-5dd5756b68-h4rvx" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.520209   17479 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.525071   17479 pod_ready.go:92] pod "etcd-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.525088   17479 pod_ready.go:81] duration metric: took 4.874289ms waiting for pod "etcd-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.525099   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.529946   17479 pod_ready.go:92] pod "kube-apiserver-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.529968   17479 pod_ready.go:81] duration metric: took 4.863853ms waiting for pod "kube-apiserver-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.529980   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.534738   17479 pod_ready.go:92] pod "kube-controller-manager-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.534761   17479 pod_ready.go:81] duration metric: took 4.770738ms waiting for pod "kube-controller-manager-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.534775   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bl7tf" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.638066   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:20.638642   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:20.639642   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:20.851901   17479 pod_ready.go:92] pod "kube-proxy-bl7tf" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:20.851925   17479 pod_ready.go:81] duration metric: took 317.142701ms waiting for pod "kube-proxy-bl7tf" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.851934   17479 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:20.875478   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:21.137259   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:21.138494   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:21.139498   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:21.251459   17479 pod_ready.go:92] pod "kube-scheduler-addons-818905" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:21.251484   17479 pod_ready.go:81] duration metric: took 399.543881ms waiting for pod "kube-scheduler-addons-818905" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:21.251496   17479 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace to be "Ready" ...
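metrics-server is usually the slowest of these pods to report Ready, since its readiness depends on the APIService it registers and on a first successful scrape. While the Ready:"False" polls below continue, that registration can be inspected directly; a sketch, assuming the conventional APIService name created by the metrics-apiservice.yaml applied earlier:

	kubectl get apiservice v1beta1.metrics.k8s.io
	# only returns data once the APIService reports Available:
	kubectl top nodes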
	I1212 22:04:21.374841   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:21.636871   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:21.639304   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:21.639596   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:21.874995   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:22.141881   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:22.142438   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:22.143641   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:22.417399   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:22.637537   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:22.639817   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:22.640734   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:22.876158   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:23.136617   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:23.139729   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:23.140634   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:23.376109   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:23.557485   17479 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:23.637145   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:23.639512   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:23.641074   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:23.875358   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:24.136471   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:24.138906   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:24.139671   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:24.375953   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:24.639501   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:24.640247   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:24.641445   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:24.875720   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:25.137774   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:25.139190   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:25.141672   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:25.420212   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:25.620475   17479 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:25.639462   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:25.640278   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:25.644526   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:25.917297   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:26.137296   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:26.139804   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:26.139906   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:26.376056   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:26.637169   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:26.639268   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:26.639800   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:26.876422   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:27.137416   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:27.138395   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:27.139529   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:27.375830   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:27.637148   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:27.638458   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:27.639619   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:27.875990   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:28.058269   17479 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:28.217242   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:28.218003   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:28.220403   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:28.376094   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:28.637127   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:28.639730   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:28.640129   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:28.875824   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:29.137243   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:29.139676   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:29.140434   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:29.375981   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:29.638032   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:29.639030   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:29.640261   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:29.876000   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:30.058937   17479 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:30.137917   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:30.139230   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:30.140201   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:30.375221   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:30.637463   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:30.638973   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:30.640498   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:30.875597   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:31.136725   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:31.139421   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:31.140316   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:31.376759   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:31.557990   17479 pod_ready.go:92] pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:31.558013   17479 pod_ready.go:81] duration metric: took 10.306510173s waiting for pod "metrics-server-7c66d45ddc-xt6xh" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:31.558025   17479 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:31.637333   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:31.639588   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:31.640155   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:31.875233   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:32.137310   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:32.138510   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:32.139840   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:32.417672   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:32.644528   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:32.644748   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:32.645540   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:32.917839   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:33.139144   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:33.222876   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:33.239140   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:33.421033   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:33.621375   17479 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:33.637305   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:33.639668   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:33.640592   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:33.876174   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:34.137092   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:34.138667   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:34.140491   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:34.375046   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:34.637509   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:34.638838   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:34.640270   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:34.875472   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:35.137709   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:35.139030   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:35.140160   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:35.376114   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:35.638561   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:35.639414   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:35.640906   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:35.875903   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:36.072270   17479 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:36.136771   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:36.139383   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:36.139829   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:36.375879   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:36.637574   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:36.638915   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:36.640057   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:36.875626   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:37.136402   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:37.139923   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:37.141626   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:37.375286   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:37.637700   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:37.638804   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:37.640548   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:37.875836   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:38.137223   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:38.141920   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:38.142750   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:38.419249   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:38.622062   17479 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:38.637376   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:38.643408   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:38.644033   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:38.919609   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:39.136968   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:39.140658   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:39.141414   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:39.417652   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:39.637119   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:39.640466   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:39.640500   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:39.875672   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:40.137640   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:40.142644   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:40.143444   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:40.375858   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:40.637834   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:40.640036   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:40.640791   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:40.917132   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:41.118216   17479 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"False"
	I1212 22:04:41.136205   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:41.139778   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:41.140483   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:41.375670   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:41.637437   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:41.639418   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:41.640274   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:41.875634   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:42.137355   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:42.139388   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:42.140341   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:42.375761   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:42.637556   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:42.639524   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:42.639947   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:42.875262   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:43.136966   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:43.138426   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:43.139662   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:43.376508   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:43.573369   17479 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace has status "Ready":"True"
	I1212 22:04:43.573399   17479 pod_ready.go:81] duration metric: took 12.01536683s waiting for pod "nvidia-device-plugin-daemonset-jc5wh" in "kube-system" namespace to be "Ready" ...
	I1212 22:04:43.573423   17479 pod_ready.go:38] duration metric: took 25.122466108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:04:43.573441   17479 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:04:43.573501   17479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:04:43.628809   17479 api_server.go:72] duration metric: took 59.319505035s to wait for apiserver process to appear ...
	I1212 22:04:43.628835   17479 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:04:43.628859   17479 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 22:04:43.633910   17479 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 22:04:43.635293   17479 api_server.go:141] control plane version: v1.28.4
	I1212 22:04:43.635320   17479 api_server.go:131] duration metric: took 6.477326ms to wait for apiserver health ...
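(The healthz probe logged above queries the apiserver directly at https://192.168.49.2:8443/healthz and expects the literal body "ok". As a sketch for reproducing it by hand, assuming the cluster is still running and that unauthenticated /healthz access is permitted, as Kubernetes normally grants via the system:public-info-viewer role:

	curl -k https://192.168.49.2:8443/healthz
	ok
)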
	I1212 22:04:43.635330   17479 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:04:43.637237   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:43.639595   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:43.640743   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:43.646002   17479 system_pods.go:59] 19 kube-system pods found
	I1212 22:04:43.646028   17479 system_pods.go:61] "coredns-5dd5756b68-h4rvx" [99f8f7ef-0255-46a3-801b-21f77c515e1d] Running
	I1212 22:04:43.646039   17479 system_pods.go:61] "csi-hostpath-attacher-0" [f7ee8827-d6c3-4986-b7bb-c26ab9650a7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:04:43.646048   17479 system_pods.go:61] "csi-hostpath-resizer-0" [1b13b2d7-3881-43ca-9e47-ddc885f26185] Running
	I1212 22:04:43.646061   17479 system_pods.go:61] "csi-hostpathplugin-lstwr" [2d937fcd-bb06-4668-9bfe-27c070954c6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:04:43.646075   17479 system_pods.go:61] "etcd-addons-818905" [7e1ccaa3-a0d1-4c82-abd8-ea89ae6385fe] Running
	I1212 22:04:43.646081   17479 system_pods.go:61] "kindnet-m2vln" [7dcc107f-df44-4d34-973b-16843b605e9d] Running
	I1212 22:04:43.646085   17479 system_pods.go:61] "kube-apiserver-addons-818905" [b22f7406-f5fb-48ca-b61a-8d06485c07b6] Running
	I1212 22:04:43.646089   17479 system_pods.go:61] "kube-controller-manager-addons-818905" [4732b038-170a-4fab-a6ef-6a7ce76a8c88] Running
	I1212 22:04:43.646097   17479 system_pods.go:61] "kube-ingress-dns-minikube" [bbda5a3b-e96f-420f-9b8f-95922e769a8d] Running
	I1212 22:04:43.646102   17479 system_pods.go:61] "kube-proxy-bl7tf" [7627c43a-311d-4224-a91e-279a1531c679] Running
	I1212 22:04:43.646107   17479 system_pods.go:61] "kube-scheduler-addons-818905" [31aa08dc-c698-4da7-a503-dbbfed58f4f3] Running
	I1212 22:04:43.646111   17479 system_pods.go:61] "metrics-server-7c66d45ddc-xt6xh" [37df71e4-7ba7-496c-b885-921e393df60e] Running
	I1212 22:04:43.646117   17479 system_pods.go:61] "nvidia-device-plugin-daemonset-jc5wh" [061520bc-edd5-47af-9f5a-ba1bfb03e15e] Running
	I1212 22:04:43.646121   17479 system_pods.go:61] "registry-5g6k8" [fc7ebc27-babc-48dc-928d-1b1782ea01ea] Running
	I1212 22:04:43.646126   17479 system_pods.go:61] "registry-proxy-9wc4f" [957915c2-6516-426f-b900-6143af5f0982] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:04:43.646135   17479 system_pods.go:61] "snapshot-controller-58dbcc7b99-csqbk" [bf6ed58e-0666-4b13-8e54-17ae93b960ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:43.646140   17479 system_pods.go:61] "snapshot-controller-58dbcc7b99-td5gh" [7def081e-409c-4e81-9f5c-267d406f0319] Running
	I1212 22:04:43.646147   17479 system_pods.go:61] "storage-provisioner" [dde8762e-2658-4b5a-b9fb-284579c6615b] Running
	I1212 22:04:43.646151   17479 system_pods.go:61] "tiller-deploy-7b677967b9-8vj4p" [1ba59c52-d351-4cfa-8c97-733b952603c2] Running
	I1212 22:04:43.646158   17479 system_pods.go:74] duration metric: took 10.822488ms to wait for pod list to return data ...
	I1212 22:04:43.646164   17479 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:04:43.647953   17479 default_sa.go:45] found service account: "default"
	I1212 22:04:43.647968   17479 default_sa.go:55] duration metric: took 1.796604ms for default service account to be created ...
	I1212 22:04:43.647974   17479 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:04:43.658927   17479 system_pods.go:86] 19 kube-system pods found
	I1212 22:04:43.658949   17479 system_pods.go:89] "coredns-5dd5756b68-h4rvx" [99f8f7ef-0255-46a3-801b-21f77c515e1d] Running
	I1212 22:04:43.658961   17479 system_pods.go:89] "csi-hostpath-attacher-0" [f7ee8827-d6c3-4986-b7bb-c26ab9650a7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 22:04:43.658968   17479 system_pods.go:89] "csi-hostpath-resizer-0" [1b13b2d7-3881-43ca-9e47-ddc885f26185] Running
	I1212 22:04:43.658982   17479 system_pods.go:89] "csi-hostpathplugin-lstwr" [2d937fcd-bb06-4668-9bfe-27c070954c6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 22:04:43.658994   17479 system_pods.go:89] "etcd-addons-818905" [7e1ccaa3-a0d1-4c82-abd8-ea89ae6385fe] Running
	I1212 22:04:43.659004   17479 system_pods.go:89] "kindnet-m2vln" [7dcc107f-df44-4d34-973b-16843b605e9d] Running
	I1212 22:04:43.659015   17479 system_pods.go:89] "kube-apiserver-addons-818905" [b22f7406-f5fb-48ca-b61a-8d06485c07b6] Running
	I1212 22:04:43.659026   17479 system_pods.go:89] "kube-controller-manager-addons-818905" [4732b038-170a-4fab-a6ef-6a7ce76a8c88] Running
	I1212 22:04:43.659036   17479 system_pods.go:89] "kube-ingress-dns-minikube" [bbda5a3b-e96f-420f-9b8f-95922e769a8d] Running
	I1212 22:04:43.659046   17479 system_pods.go:89] "kube-proxy-bl7tf" [7627c43a-311d-4224-a91e-279a1531c679] Running
	I1212 22:04:43.659055   17479 system_pods.go:89] "kube-scheduler-addons-818905" [31aa08dc-c698-4da7-a503-dbbfed58f4f3] Running
	I1212 22:04:43.659065   17479 system_pods.go:89] "metrics-server-7c66d45ddc-xt6xh" [37df71e4-7ba7-496c-b885-921e393df60e] Running
	I1212 22:04:43.659074   17479 system_pods.go:89] "nvidia-device-plugin-daemonset-jc5wh" [061520bc-edd5-47af-9f5a-ba1bfb03e15e] Running
	I1212 22:04:43.659083   17479 system_pods.go:89] "registry-5g6k8" [fc7ebc27-babc-48dc-928d-1b1782ea01ea] Running
	I1212 22:04:43.659093   17479 system_pods.go:89] "registry-proxy-9wc4f" [957915c2-6516-426f-b900-6143af5f0982] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 22:04:43.659107   17479 system_pods.go:89] "snapshot-controller-58dbcc7b99-csqbk" [bf6ed58e-0666-4b13-8e54-17ae93b960ee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 22:04:43.659118   17479 system_pods.go:89] "snapshot-controller-58dbcc7b99-td5gh" [7def081e-409c-4e81-9f5c-267d406f0319] Running
	I1212 22:04:43.659129   17479 system_pods.go:89] "storage-provisioner" [dde8762e-2658-4b5a-b9fb-284579c6615b] Running
	I1212 22:04:43.659138   17479 system_pods.go:89] "tiller-deploy-7b677967b9-8vj4p" [1ba59c52-d351-4cfa-8c97-733b952603c2] Running
	I1212 22:04:43.659149   17479 system_pods.go:126] duration metric: took 11.169202ms to wait for k8s-apps to be running ...
	I1212 22:04:43.659161   17479 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:04:43.659208   17479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:04:43.721826   17479 system_svc.go:56] duration metric: took 62.644305ms WaitForService to wait for kubelet.
	I1212 22:04:43.721862   17479 kubeadm.go:581] duration metric: took 59.41257317s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:04:43.721889   17479 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:04:43.724980   17479 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 22:04:43.725011   17479 node_conditions.go:123] node cpu capacity is 8
	I1212 22:04:43.725024   17479 node_conditions.go:105] duration metric: took 3.12891ms to run NodePressure ...
	I1212 22:04:43.725038   17479 start.go:228] waiting for startup goroutines ...
	I1212 22:04:43.876377   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:44.137458   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:44.139055   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:44.140776   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:44.375959   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:44.637059   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:44.639509   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:44.641841   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:44.876054   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:45.137334   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:45.139662   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:45.140549   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:45.375821   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:45.638224   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:45.638655   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:45.641153   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:45.875811   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:46.138015   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:46.139083   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:46.140345   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:46.375344   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:46.636846   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:46.638206   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:46.639688   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:46.876486   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:47.137482   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:47.138442   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 22:04:47.140285   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:47.375133   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:47.636371   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:47.638962   17479 kapi.go:107] duration metric: took 57.014360615s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 22:04:47.639506   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:47.875466   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:48.137628   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:48.140267   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:48.375561   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:48.637624   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:48.641072   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:48.875512   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:49.137622   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:49.140073   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:49.375251   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:49.636886   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:49.639869   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:49.875740   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:50.137521   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:50.140757   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:50.375779   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:50.637191   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:50.640064   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:50.875795   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:51.137279   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:51.140411   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:51.375205   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:51.636912   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:51.639463   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:51.875832   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:52.137560   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:52.140418   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:52.375617   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:52.637944   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:52.640117   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:52.876058   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:53.136794   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:53.140562   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:53.375082   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:53.636476   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:53.639065   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:53.927582   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 22:04:54.137446   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:54.142624   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:54.421464   17479 kapi.go:107] duration metric: took 1m2.056021595s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 22:04:54.423868   17479 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-818905 cluster.
	I1212 22:04:54.425712   17479 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 22:04:54.427896   17479 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
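(The three gcp-auth hints above describe opt-out behavior decided at pod-creation time: the gcp-auth-webhook seen later in the container list injects credentials unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of creating such an opted-out pod with kubectl; the pod name and image are placeholders, not taken from this run:

	kubectl --context addons-818905 run skip-creds-demo --image=nginx --labels=gcp-auth-skip-secret=true
)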
	I1212 22:04:54.637221   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:54.640883   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:55.139612   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:55.143128   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:55.637825   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:55.640787   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:56.136571   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:56.140139   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:56.636915   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:56.639773   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:57.137091   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:57.139803   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:57.638159   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:57.640544   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:58.137083   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:58.139825   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:58.636833   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:58.639946   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:59.136879   17479 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 22:04:59.139575   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:04:59.680855   17479 kapi.go:107] duration metric: took 1m9.059568635s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 22:04:59.681171   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:00.141052   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:00.640584   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:01.140902   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:01.640120   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:02.141012   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:02.640095   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:03.140557   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:03.640476   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:04.139667   17479 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 22:05:04.640049   17479 kapi.go:107] duration metric: took 1m13.013782907s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 22:05:04.641983   17479 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, helm-tiller, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1212 22:05:04.643476   17479 addons.go:502] enable addons completed in 1m20.401697082s: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner metrics-server helm-tiller storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1212 22:05:04.643517   17479 start.go:233] waiting for cluster config update ...
	I1212 22:05:04.643533   17479 start.go:242] writing updated cluster config ...
	I1212 22:05:04.643789   17479 ssh_runner.go:195] Run: rm -f paused
	I1212 22:05:04.691005   17479 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:05:04.692931   17479 out.go:177] * Done! kubectl is now configured to use "addons-818905" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.224734130Z" level=info msg="Stopping pod sandbox: 7d61288a3d85bec2e5560581752482d848dd88bfe45632046a20adcc469dd5e3" id=6734a54b-d51b-4572-936f-23db9971c325 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.224983353Z" level=info msg="Got pod network &{Name:registry-5g6k8 Namespace:kube-system ID:7d61288a3d85bec2e5560581752482d848dd88bfe45632046a20adcc469dd5e3 UID:fc7ebc27-babc-48dc-928d-1b1782ea01ea NetNS:/var/run/netns/c3637aba-c01c-4acd-a9ae-a8d543835556 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.225147773Z" level=info msg="Deleting pod kube-system_registry-5g6k8 from CNI network \"kindnet\" (type=ptp)"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.254189643Z" level=info msg="Stopped container 5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6: kube-system/registry-proxy-9wc4f/registry-proxy" id=85828c1b-28b7-4ae8-88aa-9e9dae777e20 name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.254753403Z" level=info msg="Stopping pod sandbox: 8b641ddfc80ed9b4abcd74b2b1203814cc22bbcf873883cc21ccb0c580943d6e" id=f1cde4a9-7613-4985-825a-4f594dd2a900 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.257901891Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-YZSCS4WDKKGZO7QN - [0:0]\n:KUBE-HP-Q4PWHY5OEFHRUBNV - [0:0]\n:KUBE-HP-BLIY6U67K3DNCGRI - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7c6974c4d8-wchx7_ingress-nginx_31a710c2-33fa-4729-a129-efb759861c19_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-Q4PWHY5OEFHRUBNV\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7c6974c4d8-wchx7_ingress-nginx_31a710c2-33fa-4729-a129-efb759861c19_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-YZSCS4WDKKGZO7QN\n-A KUBE-HP-Q4PWHY5OEFHRUBNV -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-7c6974c4d8-wchx7_ingress-nginx_31a710c2-33fa-4729-a129-efb759861c19_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-Q4PWHY5OEFHRUBNV -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7c6974c4d8-wchx7_ingress-nginx_31a710c2-33fa-4729-a129-efb759861c19_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-A KUBE-HP-YZSCS4WDKKGZO7QN -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-7c6974c4d8-wchx7_ingress-nginx_31a710c2-33fa-4729-a129-efb759861c19_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-YZSCS4WDKKGZO7QN -p tcp -m comment --comment \"k8s_ingress-nginx-controller-7c6974c4d8-wchx7_ingress-nginx_31a710c2-33fa-4729-a129-efb759861c19_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-X KUBE-HP-BLIY6U67K3DNCGRI\nCOMMIT\n"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.260034012Z" level=info msg="Closing host port tcp:5000"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.261386986Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.261542481Z" level=info msg="Got pod network &{Name:registry-proxy-9wc4f Namespace:kube-system ID:8b641ddfc80ed9b4abcd74b2b1203814cc22bbcf873883cc21ccb0c580943d6e UID:957915c2-6516-426f-b900-6143af5f0982 NetNS:/var/run/netns/89f8ca0f-cbb4-40ff-b818-6bebec6beeca Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.261654684Z" level=info msg="Deleting pod kube-system_registry-proxy-9wc4f from CNI network \"kindnet\" (type=ptp)"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.264888986Z" level=info msg="Stopped pod sandbox: 7d61288a3d85bec2e5560581752482d848dd88bfe45632046a20adcc469dd5e3" id=6734a54b-d51b-4572-936f-23db9971c325 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.293020868Z" level=info msg="Stopped pod sandbox: 8b641ddfc80ed9b4abcd74b2b1203814cc22bbcf873883cc21ccb0c580943d6e" id=f1cde4a9-7613-4985-825a-4f594dd2a900 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.350262754Z" level=info msg="Removing container: 5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6" id=50cada12-1820-489a-a177-a2e0bc97350f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.367103349Z" level=info msg="Removed container 5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6: kube-system/registry-proxy-9wc4f/registry-proxy" id=50cada12-1820-489a-a177-a2e0bc97350f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.368997496Z" level=info msg="Removing container: 48d8fedfd59b8396d5279625c9514b6a126bb90b045d249fab82eda64dc67783" id=8e0e9f20-d9f7-4e10-baa5-11a9bf2a40eb name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.387771151Z" level=info msg="Removed container 48d8fedfd59b8396d5279625c9514b6a126bb90b045d249fab82eda64dc67783: default/registry-test/registry-test" id=8e0e9f20-d9f7-4e10-baa5-11a9bf2a40eb name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.388957575Z" level=info msg="Removing container: 4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891" id=f95a81a8-07f9-4cf4-8e4a-a336d1943dc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.427168250Z" level=info msg="Removed container 4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891: kube-system/registry-5g6k8/registry" id=f95a81a8-07f9-4cf4-8e4a-a336d1943dc2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.616511092Z" level=info msg="Stopped container 4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51: kube-system/metrics-server-7c66d45ddc-xt6xh/metrics-server" id=4999d454-436f-406c-a2d4-d675503a0f8b name=/runtime.v1.RuntimeService/StopContainer
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.617008674Z" level=info msg="Stopping pod sandbox: 58e232a6433f8538c4280ae6c16e88692d6f594605eb835d9a39b351e0806a10" id=f8d26d10-651f-4df8-8976-33057f9987f4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.617192896Z" level=info msg="Got pod network &{Name:metrics-server-7c66d45ddc-xt6xh Namespace:kube-system ID:58e232a6433f8538c4280ae6c16e88692d6f594605eb835d9a39b351e0806a10 UID:37df71e4-7ba7-496c-b885-921e393df60e NetNS:/var/run/netns/6d3b4be9-84d4-46f7-8f65-b222548fc4db Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.617302845Z" level=info msg="Deleting pod kube-system_metrics-server-7c66d45ddc-xt6xh from CNI network \"kindnet\" (type=ptp)"
	Dec 12 22:05:20 addons-818905 crio[951]: time="2023-12-12 22:05:20.649023482Z" level=info msg="Stopped pod sandbox: 58e232a6433f8538c4280ae6c16e88692d6f594605eb835d9a39b351e0806a10" id=f8d26d10-651f-4df8-8976-33057f9987f4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 22:05:21 addons-818905 crio[951]: time="2023-12-12 22:05:21.358466949Z" level=info msg="Removing container: 4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51" id=5a7d1f7c-9924-479c-a9a7-5044013ada20 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 22:05:21 addons-818905 crio[951]: time="2023-12-12 22:05:21.372686382Z" level=info msg="Removed container 4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51: kube-system/metrics-server-7c66d45ddc-xt6xh/metrics-server" id=5a7d1f7c-9924-479c-a9a7-5044013ada20 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	16eacd69ae649       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             3 seconds ago        Exited              helper-pod                               0                   9fe5cd494580b       helper-pod-delete-pvc-63257cc8-df89-4d4d-9324-970810f80368
	fc82c5c6bc667       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                              4 seconds ago        Running             nginx                                    0                   84c21280efaa0       nginx
	4f63c03a7ed74       docker.io/library/busybox@sha256:1780cb47b7dfbcbf1e511be1cdb62722bd0ce208b996ea199689b56892e15af9                                            7 seconds ago        Exited              busybox                                  0                   3e1aba28eb2b4       test-local-path
	102ebd2482dc3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 seconds ago       Running             csi-snapshotter                          0                   7fe6b30094e0c       csi-hostpathplugin-lstwr
	44306b9ab3c7f       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago       Running             csi-provisioner                          0                   7fe6b30094e0c       csi-hostpathplugin-lstwr
	8616631f0021f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            20 seconds ago       Running             liveness-probe                           0                   7fe6b30094e0c       csi-hostpathplugin-lstwr
	630e6f156278b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           21 seconds ago       Running             hostpath                                 0                   7fe6b30094e0c       csi-hostpathplugin-lstwr
	a80511f2cc3ed       registry.k8s.io/ingress-nginx/controller@sha256:0115d7e01987c13e1be90b09c223c3e0d8e9a92e97c0421e712ad3577e2d78e5                             22 seconds ago       Running             controller                               0                   9511d01077962       ingress-nginx-controller-7c6974c4d8-wchx7
	b1ee5925a9708       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 28 seconds ago       Running             gcp-auth                                 0                   fdc14f8613b34       gcp-auth-d4c87556c-bs24v
	4a3638def872d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:2c4859cacbc95d19331bdb9eaedf709c7d2655a04a74c4e93acc2e263e31b1ce                            29 seconds ago       Exited              gadget                                   3                   7f2f0be93b840       gadget-69w46
	31a7de257dd78       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                30 seconds ago       Running             node-driver-registrar                    0                   7fe6b30094e0c       csi-hostpathplugin-lstwr
	18966951754e6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   31 seconds ago       Exited              patch                                    0                   a85ca2c1a0495       ingress-nginx-admission-patch-vwf7q
	03c0cd331ca85       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   31 seconds ago       Exited              patch                                    0                   538ccd59f6647       gcp-auth-certs-patch-ccvnc
	7ebddad8dccfe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   31 seconds ago       Exited              create                                   0                   b7416313abd7c       gcp-auth-certs-create-tg99p
	22d4b7d1a687e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             31 seconds ago       Running             csi-attacher                             0                   96abdd0b1ddaf       csi-hostpath-attacher-0
	3826e4fb13cb6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   32 seconds ago       Exited              create                                   0                   8e36add1cbdd5       ingress-nginx-admission-create-d89bq
	1f8c65b8e3be5       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   1904c3b023ff8       snapshot-controller-58dbcc7b99-csqbk
	2ade154fdf461       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   34 seconds ago       Running             csi-external-health-monitor-controller   0                   7fe6b30094e0c       csi-hostpathplugin-lstwr
	ed8787083a833       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             45 seconds ago       Running             minikube-ingress-dns                     0                   b7774c539a21a       kube-ingress-dns-minikube
	0825ed198014f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             49 seconds ago       Running             local-path-provisioner                   0                   e82816abf89fb       local-path-provisioner-78b46b4d5c-j72hc
	fe69091ade7e3       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              52 seconds ago       Running             csi-resizer                              0                   0526ab15a7c20       csi-hostpath-resizer-0
	82904c161d859       gcr.io/cloud-spanner-emulator/emulator@sha256:390e0daaf0631b9a67b7826ef740224ad6437739bbe4b06ebde5719cd39c903f                               54 seconds ago       Running             cloud-spanner-emulator                   0                   dccd4497f8b8c       cloud-spanner-emulator-5649c69bf6-q5ss5
	f383a794043d8       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      56 seconds ago       Running             volume-snapshot-controller               0                   0b4730807aa37       snapshot-controller-58dbcc7b99-td5gh
	52293c0b2f439       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   58a6669777cda       storage-provisioner
	ed348533ef672       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   68ea693663b33       coredns-5dd5756b68-h4rvx
	2ef41bd4e9451       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                                             About a minute ago   Running             kindnet-cni                              0                   0ec88a16ccdd2       kindnet-m2vln
	81c775386ee4e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             About a minute ago   Running             kube-proxy                               0                   868c55f7c3d1b       kube-proxy-bl7tf
	0b87a62984772       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             About a minute ago   Running             kube-scheduler                           0                   993c5a1d8bc2c       kube-scheduler-addons-818905
	1f13b146792de       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             About a minute ago   Running             kube-controller-manager                  0                   ab505411af167       kube-controller-manager-addons-818905
	dda8cb4e3c38a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             About a minute ago   Running             kube-apiserver                           0                   4be224fec035c       kube-apiserver-addons-818905
	1e20b73cc258a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             About a minute ago   Running             etcd                                     0                   6487d343c9a2a       etcd-addons-818905
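	The container inventory above can be reproduced against a live profile with crictl inside the node (a sketch; the exact invocation is an assumption, since the test harness captures this output itself):
	
	  minikube -p addons-818905 ssh -- sudo crictl ps -a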
	
	* 
	* ==> coredns [ed348533ef6724d77f5ca2aa99bab9b642233855afff876503ba91cad2533060] <==
	* [INFO] 10.244.0.10:56436 - 55134 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060718s
	[INFO] 10.244.0.10:57912 - 56730 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003870434s
	[INFO] 10.244.0.10:57912 - 63901 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005013972s
	[INFO] 10.244.0.10:33431 - 40115 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005555908s
	[INFO] 10.244.0.10:33431 - 36022 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007348376s
	[INFO] 10.244.0.10:53568 - 55135 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003727393s
	[INFO] 10.244.0.10:53568 - 33362 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006365233s
	[INFO] 10.244.0.10:45949 - 28566 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061605s
	[INFO] 10.244.0.10:45949 - 29587 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124965s
	[INFO] 10.244.0.20:44603 - 16220 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151903s
	[INFO] 10.244.0.20:39184 - 23548 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207324s
	[INFO] 10.244.0.20:57548 - 56809 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116463s
	[INFO] 10.244.0.20:45647 - 25628 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180314s
	[INFO] 10.244.0.20:33214 - 47482 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008297s
	[INFO] 10.244.0.20:45668 - 28773 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121389s
	[INFO] 10.244.0.20:32789 - 41279 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004616765s
	[INFO] 10.244.0.20:46201 - 31006 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006725551s
	[INFO] 10.244.0.20:49993 - 49805 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006223193s
	[INFO] 10.244.0.20:47658 - 35516 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006279737s
	[INFO] 10.244.0.20:38759 - 31081 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006203527s
	[INFO] 10.244.0.20:56248 - 31689 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006832427s
	[INFO] 10.244.0.20:55484 - 14845 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000634302s
	[INFO] 10.244.0.20:40729 - 29019 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000779895s
	[INFO] 10.244.0.25:59269 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00022415s
	[INFO] 10.244.0.25:56475 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123932s
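	The NXDOMAIN runs above are expected resolver behavior rather than failures: with the default ndots:5 option, a short name such as storage.googleapis.com is tried against every entry of the pod's search path before the bare name resolves (the final NOERROR answers). A pod resolv.conf consistent with the suffixes seen in these queries would look roughly like this (nameserver address and entry ordering are assumptions inferred from the log):
	
	  search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	  nameserver 10.96.0.10
	  options ndots:5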
	
	* 
	* ==> describe nodes <==
	* Name:               addons-818905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-818905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=addons-818905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_03_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-818905
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-818905"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-818905
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:05:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:05:03 +0000   Tue, 12 Dec 2023 22:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:05:03 +0000   Tue, 12 Dec 2023 22:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:05:03 +0000   Tue, 12 Dec 2023 22:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:05:03 +0000   Tue, 12 Dec 2023 22:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-818905
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec60f86b068a4c23b171320d45849218
	  System UUID:                e08a4ec2-a8bc-4866-875b-4a1a708b9c93
	  Boot ID:                    e32ab69d-45ad-4e0a-b786-ce498c8395cb
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-q5ss5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  gadget                      gadget-69w46                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gcp-auth                    gcp-auth-d4c87556c-bs24v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-wchx7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-h4rvx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     96s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-lstwr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 etcd-addons-818905                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-m2vln                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      97s
	  kube-system                 kube-apiserver-addons-818905                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-addons-818905        200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-bl7tf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-addons-818905                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 snapshot-controller-58dbcc7b99-csqbk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 snapshot-controller-58dbcc7b99-td5gh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  local-path-storage          local-path-provisioner-78b46b4d5c-j72hc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 92s   kube-proxy       
	  Normal  Starting                 110s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s  kubelet          Node addons-818905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s  kubelet          Node addons-818905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s  kubelet          Node addons-818905 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           98s   node-controller  Node addons-818905 event: Registered Node addons-818905 in Controller
	  Normal  NodeReady                63s   kubelet          Node addons-818905 status is now: NodeReady
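	The node view above can be regenerated while the profile is still running (assuming the addons-818905 kubeconfig context exists):
	
	  kubectl --context addons-818905 describe node addons-818905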
	
	* 
	* ==> dmesg <==
	* [Dec12 21:17]  #2
	[  +0.001521]  #3
	[  +0.000000]  #4
	[  +0.003210] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001818] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001363] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.004120]  #5
	[  +0.000663]  #6
	[  +0.000874]  #7
	[  +0.069659] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.523614] i8042: Warning: Keylock active
	[  +0.007605] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003007] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000639] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000627] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000623] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000623] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000674] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000684] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 8
	[ +10.028692] kauditd_printk_skb: 36 callbacks suppressed
	
	* 
	* ==> etcd [1e20b73cc258aff4698a8b9e41ceb38f17ab5d0081fe7bf33bc7e4ccfaae1d58] <==
	* {"level":"info","ts":"2023-12-12T22:03:47.733075Z","caller":"traceutil/trace.go:171","msg":"trace[1892750166] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"397.220366ms","start":"2023-12-12T22:03:47.335842Z","end":"2023-12-12T22:03:47.733063Z","steps":["trace[1892750166] 'process raft request'  (duration: 186.0367ms)","trace[1892750166] 'compare'  (duration: 210.413349ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T22:03:47.73369Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-12T22:03:47.232776Z","time spent":"500.847467ms","remote":"127.0.0.1:42534","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3782,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" mod_revision:368 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" value_size:3722 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" > >"}
	{"level":"info","ts":"2023-12-12T22:03:47.733158Z","caller":"traceutil/trace.go:171","msg":"trace[227964355] linearizableReadLoop","detail":"{readStateIndex:389; appliedIndex:387; }","duration":"296.420132ms","start":"2023-12-12T22:03:47.436728Z","end":"2023-12-12T22:03:47.733149Z","steps":["trace[227964355] 'read index received'  (duration: 85.158399ms)","trace[227964355] 'applied index is now lower than readState.Index'  (duration: 211.259587ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T22:03:47.733205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.484736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:03:47.733966Z","caller":"traceutil/trace.go:171","msg":"trace[1146703045] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:380; }","duration":"297.250166ms","start":"2023-12-12T22:03:47.436706Z","end":"2023-12-12T22:03:47.733956Z","steps":["trace[1146703045] 'agreement among raft nodes before linearized reading'  (duration: 296.466959ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:03:47.734118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.823686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-818905\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-12-12T22:03:47.734377Z","caller":"traceutil/trace.go:171","msg":"trace[249540473] range","detail":"{range_begin:/registry/minions/addons-818905; range_end:; response_count:1; response_revision:380; }","duration":"215.085594ms","start":"2023-12-12T22:03:47.519284Z","end":"2023-12-12T22:03:47.73437Z","steps":["trace[249540473] 'agreement among raft nodes before linearized reading'  (duration: 214.800282ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:47.7333Z","caller":"traceutil/trace.go:171","msg":"trace[1974505543] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"296.7175ms","start":"2023-12-12T22:03:47.436575Z","end":"2023-12-12T22:03:47.733293Z","steps":["trace[1974505543] 'process raft request'  (duration: 296.025783ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:47.733368Z","caller":"traceutil/trace.go:171","msg":"trace[1907912190] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"213.916451ms","start":"2023-12-12T22:03:47.519444Z","end":"2023-12-12T22:03:47.733361Z","steps":["trace[1907912190] 'process raft request'  (duration: 213.21302ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:47.733489Z","caller":"traceutil/trace.go:171","msg":"trace[164526594] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"211.683311ms","start":"2023-12-12T22:03:47.521799Z","end":"2023-12-12T22:03:47.733483Z","steps":["trace[164526594] 'process raft request'  (duration: 210.886347ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:03:47.734259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.294136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:03:47.735049Z","caller":"traceutil/trace.go:171","msg":"trace[175603115] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:380; }","duration":"114.082128ms","start":"2023-12-12T22:03:47.620958Z","end":"2023-12-12T22:03:47.73504Z","steps":["trace[175603115] 'agreement among raft nodes before linearized reading'  (duration: 113.280574ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-12T22:03:47.734341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.187669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-12T22:03:47.736382Z","caller":"traceutil/trace.go:171","msg":"trace[1780352886] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:380; }","duration":"204.222619ms","start":"2023-12-12T22:03:47.532148Z","end":"2023-12-12T22:03:47.736371Z","steps":["trace[1780352886] 'agreement among raft nodes before linearized reading'  (duration: 202.177765ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:03:48.529606Z","caller":"traceutil/trace.go:171","msg":"trace[1062574331] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"104.410935ms","start":"2023-12-12T22:03:48.425178Z","end":"2023-12-12T22:03:48.529589Z","steps":["trace[1062574331] 'process raft request'  (duration: 104.295112ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:28.214482Z","caller":"traceutil/trace.go:171","msg":"trace[901079705] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:961; }","duration":"133.32295ms","start":"2023-12-12T22:04:28.081138Z","end":"2023-12-12T22:04:28.214461Z","steps":["trace[901079705] 'read index received'  (duration: 69.579218ms)","trace[901079705] 'applied index is now lower than readState.Index'  (duration: 63.743099ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:04:28.214571Z","caller":"traceutil/trace.go:171","msg":"trace[849702681] transaction","detail":"{read_only:false; response_revision:937; number_of_response:1; }","duration":"142.913176ms","start":"2023-12-12T22:04:28.071632Z","end":"2023-12-12T22:04:28.214546Z","steps":["trace[849702681] 'process raft request'  (duration: 79.128034ms)","trace[849702681] 'compare'  (duration: 63.60257ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-12T22:04:28.214639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.497458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5736"}
	{"level":"info","ts":"2023-12-12T22:04:28.214667Z","caller":"traceutil/trace.go:171","msg":"trace[2136373322] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:937; }","duration":"133.542877ms","start":"2023-12-12T22:04:28.081116Z","end":"2023-12-12T22:04:28.214658Z","steps":["trace[2136373322] 'agreement among raft nodes before linearized reading'  (duration: 133.465607ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.466303Z","caller":"traceutil/trace.go:171","msg":"trace[1713887354] transaction","detail":"{read_only:false; response_revision:1113; number_of_response:1; }","duration":"120.793689ms","start":"2023-12-12T22:04:59.345494Z","end":"2023-12-12T22:04:59.466288Z","steps":["trace[1713887354] 'process raft request'  (duration: 29.33325ms)","trace[1713887354] 'compare'  (duration: 91.054842ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:04:59.466484Z","caller":"traceutil/trace.go:171","msg":"trace[1918326014] transaction","detail":"{read_only:false; response_revision:1114; number_of_response:1; }","duration":"120.047992ms","start":"2023-12-12T22:04:59.346427Z","end":"2023-12-12T22:04:59.466475Z","steps":["trace[1918326014] 'process raft request'  (duration: 119.544939ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.466551Z","caller":"traceutil/trace.go:171","msg":"trace[1204744217] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"119.798932ms","start":"2023-12-12T22:04:59.346747Z","end":"2023-12-12T22:04:59.466546Z","steps":["trace[1204744217] 'process raft request'  (duration: 119.259161ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.466605Z","caller":"traceutil/trace.go:171","msg":"trace[483315355] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"119.685654ms","start":"2023-12-12T22:04:59.346914Z","end":"2023-12-12T22:04:59.4666Z","steps":["trace[483315355] 'process raft request'  (duration: 119.112089ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-12T22:04:59.677923Z","caller":"traceutil/trace.go:171","msg":"trace[173774252] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"144.00529ms","start":"2023-12-12T22:04:59.533894Z","end":"2023-12-12T22:04:59.677899Z","steps":["trace[173774252] 'process raft request'  (duration: 58.190446ms)","trace[173774252] 'compare'  (duration: 85.597841ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T22:05:16.840896Z","caller":"traceutil/trace.go:171","msg":"trace[110425836] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"120.48092ms","start":"2023-12-12T22:05:16.720394Z","end":"2023-12-12T22:05:16.840875Z","steps":["trace[110425836] 'process raft request'  (duration: 120.311733ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [b1ee5925a9708fdabe6245a489897046f22f720a6e22d75ae27de76f71464a64] <==
	* 2023/12/12 22:04:53 GCP Auth Webhook started!
	2023/12/12 22:05:05 Ready to marshal response ...
	2023/12/12 22:05:05 Ready to write response ...
	2023/12/12 22:05:05 Ready to marshal response ...
	2023/12/12 22:05:05 Ready to write response ...
	2023/12/12 22:05:09 Ready to marshal response ...
	2023/12/12 22:05:09 Ready to write response ...
	2023/12/12 22:05:10 Ready to marshal response ...
	2023/12/12 22:05:10 Ready to write response ...
	2023/12/12 22:05:14 Ready to marshal response ...
	2023/12/12 22:05:14 Ready to write response ...
	2023/12/12 22:05:17 Ready to marshal response ...
	2023/12/12 22:05:17 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:05:21 up 47 min,  0 users,  load average: 1.97, 1.34, 0.57
	Linux addons-818905 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2ef41bd4e945194bda0c923e875f5cc6e9ad3681034340b43547a78a7b19508b] <==
	* I1212 22:03:46.943186       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 22:03:46.943239       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1212 22:03:46.943332       1 main.go:116] setting mtu 1500 for CNI 
	I1212 22:03:46.943344       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 22:03:46.943359       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 22:04:18.063726       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1212 22:04:18.070862       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:04:18.070886       1 main.go:227] handling current node
	I1212 22:04:28.216244       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:04:28.216275       1 main.go:227] handling current node
	I1212 22:04:38.227395       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:04:38.227425       1 main.go:227] handling current node
	I1212 22:04:48.230875       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:04:48.230897       1 main.go:227] handling current node
	I1212 22:04:58.242770       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:04:58.242799       1 main.go:227] handling current node
	I1212 22:05:08.254456       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:05:08.254477       1 main.go:227] handling current node
	I1212 22:05:18.262701       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:05:18.262724       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [dda8cb4e3c38a9ebece1cf15ee0faa7a8c2c02673b2f6118723d39e15e6a8327] <==
	* W1212 22:03:50.954828       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 22:03:51.422031       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.111.85.174"}
	I1212 22:03:51.437503       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I1212 22:03:51.551324       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.97.124.146"}
	W1212 22:03:52.066072       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1212 22:03:52.301052       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.142.192"}
	W1212 22:04:18.271983       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.142.192:443: connect: connection refused
	E1212 22:04:18.272110       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.142.192:443: connect: connection refused
	W1212 22:04:18.272145       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.142.192:443: connect: connection refused
	E1212 22:04:18.272167       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.142.192:443: connect: connection refused
	W1212 22:04:18.328537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.142.192:443: connect: connection refused
	E1212 22:04:18.328574       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.142.192:443: connect: connection refused
	I1212 22:04:28.765078       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 22:04:31.101237       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.16.227:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.16.227:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.16.227:443: connect: connection refused
	W1212 22:04:31.101336       1 handler_proxy.go:93] no RequestInfo found in the context
	E1212 22:04:31.101392       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1212 22:04:31.101604       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.16.227:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.16.227:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.16.227:443: connect: connection refused
	I1212 22:04:31.125914       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 22:04:31.132075       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1212 22:05:10.589120       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 22:05:10.919792       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.79.104"}
	E1212 22:05:19.414141       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc00b851ad0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc00ecf1c70), ResponseWriter:(*httpsnoop.rw)(0xc00ecf1c70), Flusher:(*httpsnoop.rw)(0xc00ecf1c70), CloseNotifier:(*httpsnoop.rw)(0xc00ecf1c70), Pusher:(*httpsnoop.rw)(0xc00ecf1c70)}}, encoder:(*versioning.codec)(0xc00ef6fd60), memAllocator:(*runtime.Allocator)(0xc006674468)})
	
	* 
	* ==> kube-controller-manager [1f13b146792deabef40ac99f7f50af053a8c4c7cecbac97273b118a0abd78e17] <==
	* I1212 22:04:52.342210       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1212 22:04:53.196306       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1212 22:04:53.229839       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1212 22:04:53.234307       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 22:04:53.235925       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1212 22:04:53.240111       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I1212 22:04:53.240358       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1212 22:04:53.255206       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 22:04:53.264972       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 22:04:53.316350       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1212 22:04:53.316554       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I1212 22:04:54.418449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="87.158448ms"
	I1212 22:04:54.418704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="137.542µs"
	I1212 22:04:59.343850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="98.783µs"
	I1212 22:05:03.178089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.285898ms"
	I1212 22:05:03.178198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="68.972µs"
	I1212 22:05:05.326600       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1212 22:05:05.606566       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:05:05.606611       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1212 22:05:10.408237       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="10.319933ms"
	I1212 22:05:10.408531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="110.682µs"
	I1212 22:05:13.857871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="8.147µs"
	I1212 22:05:17.955840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="8.017µs"
	I1212 22:05:19.460589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="10.218µs"
	I1212 22:05:20.079388       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="13.642µs"
	
	* 
	* ==> kube-proxy [81c775386ee4e49e512b6a390226f967bd48c00a46b35d0a4bfd4550f6ed4ce5] <==
	* I1212 22:03:48.422048       1 server_others.go:69] "Using iptables proxy"
	I1212 22:03:48.625429       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1212 22:03:49.024672       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 22:03:49.038681       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:03:49.038786       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 22:03:49.038821       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 22:03:49.038882       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:03:49.039140       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:03:49.039368       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:03:49.117871       1 config.go:188] "Starting service config controller"
	I1212 22:03:49.118905       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:03:49.118314       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:03:49.118996       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:03:49.118674       1 config.go:315] "Starting node config controller"
	I1212 22:03:49.119026       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:03:49.220409       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:03:49.220569       1 shared_informer.go:318] Caches are synced for node config
	I1212 22:03:49.220618       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0b87a62984772ddfcc3c284095b2e409611917f578db34c8a1500faf022b12f2] <==
	* E1212 22:03:28.923322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:03:28.923328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:28.923458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:03:28.923484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:03:28.923490       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:03:28.923501       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:03:28.923509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:03:28.923516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:03:28.923460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:03:28.923565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 22:03:28.923631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:03:28.923700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:03:28.923638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:03:28.927275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:03:29.829725       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:03:29.829756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 22:03:29.854028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:03:29.854068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:03:29.879308       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:03:29.879340       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 22:03:29.879452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:03:29.879475       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:03:29.886724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:03:29.886749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1212 22:03:32.020013       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.367843    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6"} err="failed to get container status \"5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6\": rpc error: code = NotFound desc = could not find container \"5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6\": container with ID starting with 5fe5a7281a6de87c3d09ad902ccc423833636fa83b1373bb1e516a4d5e8c9aa6 not found: ID does not exist"
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.367862    1550 scope.go:117] "RemoveContainer" containerID="48d8fedfd59b8396d5279625c9514b6a126bb90b045d249fab82eda64dc67783"
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.388003    1550 scope.go:117] "RemoveContainer" containerID="4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891"
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.390252    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn7d8\" (UniqueName: \"kubernetes.io/projected/fc7ebc27-babc-48dc-928d-1b1782ea01ea-kube-api-access-cn7d8\") pod \"fc7ebc27-babc-48dc-928d-1b1782ea01ea\" (UID: \"fc7ebc27-babc-48dc-928d-1b1782ea01ea\") "
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.390316    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wdm7\" (UniqueName: \"kubernetes.io/projected/957915c2-6516-426f-b900-6143af5f0982-kube-api-access-9wdm7\") pod \"957915c2-6516-426f-b900-6143af5f0982\" (UID: \"957915c2-6516-426f-b900-6143af5f0982\") "
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.392187    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7ebc27-babc-48dc-928d-1b1782ea01ea-kube-api-access-cn7d8" (OuterVolumeSpecName: "kube-api-access-cn7d8") pod "fc7ebc27-babc-48dc-928d-1b1782ea01ea" (UID: "fc7ebc27-babc-48dc-928d-1b1782ea01ea"). InnerVolumeSpecName "kube-api-access-cn7d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.392282    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957915c2-6516-426f-b900-6143af5f0982-kube-api-access-9wdm7" (OuterVolumeSpecName: "kube-api-access-9wdm7") pod "957915c2-6516-426f-b900-6143af5f0982" (UID: "957915c2-6516-426f-b900-6143af5f0982"). InnerVolumeSpecName "kube-api-access-9wdm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.427420    1550 scope.go:117] "RemoveContainer" containerID="4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891"
	Dec 12 22:05:20 addons-818905 kubelet[1550]: E1212 22:05:20.427849    1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891\": container with ID starting with 4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891 not found: ID does not exist" containerID="4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891"
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.427895    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891"} err="failed to get container status \"4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891\": rpc error: code = NotFound desc = could not find container \"4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891\": container with ID starting with 4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891 not found: ID does not exist"
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.490690    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cn7d8\" (UniqueName: \"kubernetes.io/projected/fc7ebc27-babc-48dc-928d-1b1782ea01ea-kube-api-access-cn7d8\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.490742    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9wdm7\" (UniqueName: \"kubernetes.io/projected/957915c2-6516-426f-b900-6143af5f0982-kube-api-access-9wdm7\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.792930    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/37df71e4-7ba7-496c-b885-921e393df60e-tmp-dir\") pod \"37df71e4-7ba7-496c-b885-921e393df60e\" (UID: \"37df71e4-7ba7-496c-b885-921e393df60e\") "
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.793028    1550 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv9c5\" (UniqueName: \"kubernetes.io/projected/37df71e4-7ba7-496c-b885-921e393df60e-kube-api-access-hv9c5\") pod \"37df71e4-7ba7-496c-b885-921e393df60e\" (UID: \"37df71e4-7ba7-496c-b885-921e393df60e\") "
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.793173    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37df71e4-7ba7-496c-b885-921e393df60e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "37df71e4-7ba7-496c-b885-921e393df60e" (UID: "37df71e4-7ba7-496c-b885-921e393df60e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.794710    1550 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37df71e4-7ba7-496c-b885-921e393df60e-kube-api-access-hv9c5" (OuterVolumeSpecName: "kube-api-access-hv9c5") pod "37df71e4-7ba7-496c-b885-921e393df60e" (UID: "37df71e4-7ba7-496c-b885-921e393df60e"). InnerVolumeSpecName "kube-api-access-hv9c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.893791    1550 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hv9c5\" (UniqueName: \"kubernetes.io/projected/37df71e4-7ba7-496c-b885-921e393df60e-kube-api-access-hv9c5\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:05:20 addons-818905 kubelet[1550]: I1212 22:05:20.893839    1550 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/37df71e4-7ba7-496c-b885-921e393df60e-tmp-dir\") on node \"addons-818905\" DevicePath \"\""
	Dec 12 22:05:21 addons-818905 kubelet[1550]: I1212 22:05:21.357333    1550 scope.go:117] "RemoveContainer" containerID="4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51"
	Dec 12 22:05:21 addons-818905 kubelet[1550]: I1212 22:05:21.372882    1550 scope.go:117] "RemoveContainer" containerID="4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51"
	Dec 12 22:05:21 addons-818905 kubelet[1550]: E1212 22:05:21.373239    1550 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51\": container with ID starting with 4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51 not found: ID does not exist" containerID="4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51"
	Dec 12 22:05:21 addons-818905 kubelet[1550]: I1212 22:05:21.373292    1550 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51"} err="failed to get container status \"4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51\": rpc error: code = NotFound desc = could not find container \"4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51\": container with ID starting with 4caad557cacb8812ddf15d7644a3fe921f33a1d36876fc2e8800052b929eeb51 not found: ID does not exist"
	Dec 12 22:05:21 addons-818905 kubelet[1550]: I1212 22:05:21.726697    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="37df71e4-7ba7-496c-b885-921e393df60e" path="/var/lib/kubelet/pods/37df71e4-7ba7-496c-b885-921e393df60e/volumes"
	Dec 12 22:05:21 addons-818905 kubelet[1550]: I1212 22:05:21.727207    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="957915c2-6516-426f-b900-6143af5f0982" path="/var/lib/kubelet/pods/957915c2-6516-426f-b900-6143af5f0982/volumes"
	Dec 12 22:05:21 addons-818905 kubelet[1550]: I1212 22:05:21.727831    1550 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fc7ebc27-babc-48dc-928d-1b1782ea01ea" path="/var/lib/kubelet/pods/fc7ebc27-babc-48dc-928d-1b1782ea01ea/volumes"
	
	* 
	* ==> storage-provisioner [52293c0b2f4394891addc8c5d0199b176124d008bb75df9c85893953bd9907ca] <==
	* I1212 22:04:19.272211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:04:19.279205       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:04:19.279248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:04:19.284887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:04:19.284999       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-818905_e6d50b75-d051-4c5f-a2a2-1faa5aa8f0aa!
	I1212 22:04:19.285029       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6ac0969-5fb7-4455-9f60-f7ef31f5ec2e", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-818905_e6d50b75-d051-4c5f-a2a2-1faa5aa8f0aa became leader
	I1212 22:04:19.385812       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-818905_e6d50b75-d051-4c5f-a2a2-1faa5aa8f0aa!
	

                                                
                                                
-- /stdout --
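The paired "RemoveContainer" / "ContainerStatus ... NotFound" errors in the kubelet log above are the usual teardown race: the kubelet asks CRI-O for the status of a container the runtime has already removed, so the NotFound responses are expected while the addons are being disabled. A minimal sketch for confirming a container is really gone (assuming shell access to the node is still available; crictl ships in the kic base image; the ID is taken from the log above):

	# hedged sketch: list the container in CRI-O by ID; no output means it no longer exists
	minikube -p addons-818905 ssh -- sudo crictl ps -a --id 4309fad84bba203064f39d22dcefcd00e804db786057d1d575687e27df028891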
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-818905 -n addons-818905
helpers_test.go:261: (dbg) Run:  kubectl --context addons-818905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gcp-auth-certs-patch-ccvnc ingress-nginx-admission-create-d89bq ingress-nginx-admission-patch-vwf7q helper-pod-delete-pvc-63257cc8-df89-4d4d-9324-970810f80368
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-818905 describe pod gcp-auth-certs-patch-ccvnc ingress-nginx-admission-create-d89bq ingress-nginx-admission-patch-vwf7q helper-pod-delete-pvc-63257cc8-df89-4d4d-9324-970810f80368
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-818905 describe pod gcp-auth-certs-patch-ccvnc ingress-nginx-admission-create-d89bq ingress-nginx-admission-patch-vwf7q helper-pod-delete-pvc-63257cc8-df89-4d4d-9324-970810f80368: exit status 1 (58.781944ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-ccvnc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-d89bq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vwf7q" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-63257cc8-df89-4d4d-9324-970810f80368" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-818905 describe pod gcp-auth-certs-patch-ccvnc ingress-nginx-admission-create-d89bq ingress-nginx-admission-patch-vwf7q helper-pod-delete-pvc-63257cc8-df89-4d4d-9324-970810f80368: exit status 1
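The NotFound errors above are themselves expected: the non-running pods listed by helpers_test.go:272 are completed admission/helper pods, and they can be garbage-collected between the listing and the describe call. A quick re-check of what is currently non-running, using the same field selector the harness uses (table output is an assumption for readability):

	# re-list non-running pods; completed helper pods may already be gone
	kubectl --context addons-818905 get pods -A --field-selector=status.phase!=Running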
--- FAIL: TestAddons/parallel/Headlamp (2.28s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (178.68s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-036387 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-036387 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.275465026s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-036387 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-036387 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9a2e4b31-6feb-4c76-9474-a1e547a49f45] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9a2e4b31-6feb-4c76-9474-a1e547a49f45] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.007804025s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1212 22:15:04.711047   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:15:32.396072   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
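The cert_rotation errors interleaved here appear to come from client-go's certificate reload watcher still pointing at the client.crt of the addons-818905 profile, which was deleted earlier in the run; they are noise as far as this test goes. A hedged cleanup sketch, assuming stale profile state is the cause (note this removes all local minikube profiles):

	# purge leftover profile state so stale cert watchers have nothing to reference
	minikube delete --all --purge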
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-036387 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.40650331s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
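The ssh wrapper propagates the remote command's exit code, and curl's exit code 28 is CURLE_OPERATION_TIMEDOUT, so the request to the ingress controller hung rather than being refused. A manual reproduction of the failing probe with an explicit timeout so the hang surfaces quickly (the host header and URL are the ones the test uses; --max-time is an added assumption, not part of the original check):

	# curl exits 28 on timeout; --max-time 10 bounds the wait instead of hanging for minutes
	out/minikube-linux-amd64 -p ingress-addon-legacy-036387 ssh "curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"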
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-036387 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.018074859s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
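The ";; connection timed out" output means the ingress-dns responder at 192.168.49.2 never answered on port 53, as opposed to returning NXDOMAIN. A minimal sketch for probing the resolver directly with a bounded wait (dig on the host is an assumption; the name and server come from the test above):

	# query the ingress-dns server with a 2s timeout and a single try
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test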
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 addons disable ingress --alsologtostderr -v=1
E1212 22:16:24.302072   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.307349   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.317575   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.337842   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.378126   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.458418   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.618827   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:24.939428   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:25.580340   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:26.861541   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:29.422397   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-036387 addons disable ingress --alsologtostderr -v=1: (7.409010832s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-036387
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-036387:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888",
	        "Created": "2023-12-12T22:12:27.9052501Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:12:28.202417985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888/hostname",
	        "HostsPath": "/var/lib/docker/containers/157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888/hosts",
	        "LogPath": "/var/lib/docker/containers/157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888/157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888-json.log",
	        "Name": "/ingress-addon-legacy-036387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-036387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-036387",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c508975fc8ab5cac8802761ff81521e1ee3ca64e59f22dcfbfc54d9b28f54f71-init/diff:/var/lib/docker/overlay2/315943c5fbce6bf5205163f366377908e1fa1e507321eff7fb62256fbf325087/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c508975fc8ab5cac8802761ff81521e1ee3ca64e59f22dcfbfc54d9b28f54f71/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c508975fc8ab5cac8802761ff81521e1ee3ca64e59f22dcfbfc54d9b28f54f71/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c508975fc8ab5cac8802761ff81521e1ee3ca64e59f22dcfbfc54d9b28f54f71/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-036387",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-036387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-036387",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-036387",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-036387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "532e42eb9851d593a5f8e324720ff2e4dbf8ac53cde3b171226cd3106f27f3e5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/532e42eb9851",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-036387": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "157c5d141cba",
	                        "ingress-addon-legacy-036387"
	                    ],
	                    "NetworkID": "c2213d1b724dd79cff15f824f6cd07e0b4499b5e32e252fad5abda5b75c4cdb1",
	                    "EndpointID": "96affb89fb2c9d314104d56a31f3152b981180a59f4ca0541db3678cf10a3f47",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
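When only a few fields of the inspect output matter, docker inspect accepts a Go-template format string instead of dumping the full JSON; the network name contains dashes, so the Networks map has to be dereferenced with index. A sketch that pulls just the state and the static cluster IP recorded above:

	# print container state and the 192.168.49.2 address from the Networks map
	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "ingress-addon-legacy-036387").IPAddress}}' ingress-addon-legacy-036387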
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-036387 -n ingress-addon-legacy-036387
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-036387 logs -n 25: (1.038612858s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-355715 ssh sudo cat                                               | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | /etc/test/nested/copy/16399/hosts                                            |                             |         |         |                     |                     |
	| ssh            | functional-355715 ssh findmnt                                                | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | -T /mount3                                                                   |                             |         |         |                     |                     |
	| mount          | -p functional-355715                                                         | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC |                     |
	|                | --kill=true                                                                  |                             |         |         |                     |                     |
	| update-context | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-355715 image ls                                                   | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	| update-context | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-355715 image load                                                 | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-355715 image ls                                                   | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	| image          | functional-355715 image save --daemon                                        | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-355715                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-355715 ssh pgrep                                                  | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-355715 image build -t                                             | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:12 UTC |
	|                | localhost/my-image:functional-355715                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-355715                                                            | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:11 UTC | 12 Dec 23 22:11 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-355715 image ls                                                   | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:12 UTC | 12 Dec 23 22:12 UTC |
	| delete         | -p functional-355715                                                         | functional-355715           | jenkins | v1.32.0 | 12 Dec 23 22:12 UTC | 12 Dec 23 22:12 UTC |
	| start          | -p ingress-addon-legacy-036387                                               | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:12 UTC | 12 Dec 23 22:13 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-036387                                                  | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:13 UTC | 12 Dec 23 22:13 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-036387                                                  | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:13 UTC | 12 Dec 23 22:13 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-036387                                                  | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:13 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-036387 ip                                               | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:16 UTC | 12 Dec 23 22:16 UTC |
	| addons         | ingress-addon-legacy-036387                                                  | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:16 UTC | 12 Dec 23 22:16 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-036387                                                  | ingress-addon-legacy-036387 | jenkins | v1.32.0 | 12 Dec 23 22:16 UTC | 12 Dec 23 22:16 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:12:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:12:14.119756   55774 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:12:14.119910   55774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:12:14.119922   55774 out.go:309] Setting ErrFile to fd 2...
	I1212 22:12:14.119931   55774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:12:14.120124   55774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:12:14.120804   55774 out.go:303] Setting JSON to false
	I1212 22:12:14.122215   55774 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3286,"bootTime":1702415848,"procs":619,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:12:14.122286   55774 start.go:138] virtualization: kvm guest
	I1212 22:12:14.124774   55774 out.go:177] * [ingress-addon-legacy-036387] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:12:14.126196   55774 notify.go:220] Checking for updates...
	I1212 22:12:14.127734   55774 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:12:14.129092   55774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:12:14.130557   55774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:12:14.132087   55774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:12:14.133606   55774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:12:14.134935   55774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:12:14.136425   55774 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:12:14.157228   55774 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:12:14.157315   55774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:12:14.208454   55774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-12 22:12:14.199799013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:12:14.208553   55774 docker.go:295] overlay module found
	I1212 22:12:14.210414   55774 out.go:177] * Using the docker driver based on user configuration
	I1212 22:12:14.211751   55774 start.go:298] selected driver: docker
	I1212 22:12:14.211762   55774 start.go:902] validating driver "docker" against <nil>
	I1212 22:12:14.211771   55774 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:12:14.212517   55774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:12:14.261536   55774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-12-12 22:12:14.253546583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:12:14.261684   55774 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:12:14.261902   55774 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:12:14.263948   55774 out.go:177] * Using Docker driver with root privileges
	I1212 22:12:14.265345   55774 cni.go:84] Creating CNI manager for ""
	I1212 22:12:14.265362   55774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:12:14.265373   55774 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 22:12:14.265389   55774 start_flags.go:323] config:
	{Name:ingress-addon-legacy-036387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-036387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:12:14.267004   55774 out.go:177] * Starting control plane node ingress-addon-legacy-036387 in cluster ingress-addon-legacy-036387
	I1212 22:12:14.268326   55774 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:12:14.269758   55774 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:12:14.271144   55774 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:12:14.271201   55774 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:12:14.286819   55774 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 22:12:14.286842   55774 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 22:12:14.304742   55774 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 22:12:14.304764   55774 cache.go:56] Caching tarball of preloaded images
	I1212 22:12:14.304892   55774 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:12:14.306684   55774 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1212 22:12:14.308033   55774 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:12:14.430246   55774 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1212 22:12:19.742904   55774 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:12:19.742999   55774 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:12:20.745463   55774 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1212 22:12:20.745825   55774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/config.json ...
	I1212 22:12:20.745859   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/config.json: {Name:mkd1b66f3745398fee89757a846514aac83cf57e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:20.746044   55774 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:12:20.746072   55774 start.go:365] acquiring machines lock for ingress-addon-legacy-036387: {Name:mk442a033ffe23feddfa76d3e24c503b38b6c15d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:12:20.746138   55774 start.go:369] acquired machines lock for "ingress-addon-legacy-036387" in 48.986µs
	I1212 22:12:20.746171   55774 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-036387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-036387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:12:20.746257   55774 start.go:125] createHost starting for "" (driver="docker")
	I1212 22:12:20.748687   55774 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 22:12:20.748937   55774 start.go:159] libmachine.API.Create for "ingress-addon-legacy-036387" (driver="docker")
	I1212 22:12:20.748968   55774 client.go:168] LocalClient.Create starting
	I1212 22:12:20.749059   55774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem
	I1212 22:12:20.749101   55774 main.go:141] libmachine: Decoding PEM data...
	I1212 22:12:20.749126   55774 main.go:141] libmachine: Parsing certificate...
	I1212 22:12:20.749187   55774 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem
	I1212 22:12:20.749214   55774 main.go:141] libmachine: Decoding PEM data...
	I1212 22:12:20.749233   55774 main.go:141] libmachine: Parsing certificate...
	I1212 22:12:20.749565   55774 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-036387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 22:12:20.765081   55774 cli_runner.go:211] docker network inspect ingress-addon-legacy-036387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 22:12:20.765138   55774 network_create.go:281] running [docker network inspect ingress-addon-legacy-036387] to gather additional debugging logs...
	I1212 22:12:20.765162   55774 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-036387
	W1212 22:12:20.779970   55774 cli_runner.go:211] docker network inspect ingress-addon-legacy-036387 returned with exit code 1
	I1212 22:12:20.780003   55774 network_create.go:284] error running [docker network inspect ingress-addon-legacy-036387]: docker network inspect ingress-addon-legacy-036387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-036387 not found
	I1212 22:12:20.780016   55774 network_create.go:286] output of [docker network inspect ingress-addon-legacy-036387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-036387 not found
	
	** /stderr **
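
The two inspect attempts above fail with exit status 1 simply because the network does not exist yet; that failure is the signal that triggers the create path below. The logged format string has its shell quoting collapsed by minikube's logger; restored to a runnable form, the existence check looks roughly like this (network name as in this run):

    docker network inspect ingress-addon-legacy-036387 \
      --format '{"Name": "{{.Name}}", "Driver": "{{.Driver}}", "Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}", "Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}'
    # exit status 1 + "network ... not found" => safe to create the network
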
	I1212 22:12:20.780108   55774 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:12:20.795983   55774 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000143d0}
	I1212 22:12:20.796023   55774 network_create.go:124] attempt to create docker network ingress-addon-legacy-036387 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 22:12:20.796071   55774 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-036387 ingress-addon-legacy-036387
	I1212 22:12:20.847384   55774 network_create.go:108] docker network ingress-addon-legacy-036387 192.168.49.0/24 created
	I1212 22:12:20.847413   55774 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-036387" container
	I1212 22:12:20.847467   55774 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 22:12:20.861774   55774 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-036387 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-036387 --label created_by.minikube.sigs.k8s.io=true
	I1212 22:12:20.877370   55774 oci.go:103] Successfully created a docker volume ingress-addon-legacy-036387
	I1212 22:12:20.877447   55774 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-036387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-036387 --entrypoint /usr/bin/test -v ingress-addon-legacy-036387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 22:12:22.587334   55774 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-036387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-036387 --entrypoint /usr/bin/test -v ingress-addon-legacy-036387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib: (1.709849637s)
	I1212 22:12:22.587368   55774 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-036387
	I1212 22:12:22.587404   55774 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:12:22.587433   55774 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 22:12:22.587501   55774 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-036387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 22:12:27.840824   55774 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-036387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir: (5.253274036s)
	I1212 22:12:27.840855   55774 kic.go:203] duration metric: took 5.253419 seconds to extract preloaded images to volume
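
The volume dance above is a standard trick for priming a named volume: a throwaway --rm container mounts the volume, writes into it, and exits, leaving the data behind for the real node container. A minimal sketch of the same pattern (image name and tarball path are illustrative; the image must ship tar and lz4, as the kicbase image does):

    # Seed a named volume from an lz4-compressed tarball via a disposable container.
    docker volume create demo-data
    docker run --rm \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v demo-data:/extractDir \
      --entrypoint /usr/bin/tar \
      some/tar-capable-image -I lz4 -xf /preloaded.tar -C /extractDir
    # demo-data now holds the extracted image store and outlives the container.
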
	W1212 22:12:27.840985   55774 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 22:12:27.841111   55774 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 22:12:27.891500   55774 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-036387 --name ingress-addon-legacy-036387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-036387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-036387 --network ingress-addon-legacy-036387 --ip 192.168.49.2 --volume ingress-addon-legacy-036387:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
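
That long docker run is the node container itself. Condensed and annotated (flag values as logged; $KICBASE_IMAGE stands in for the digest-pinned kicbase image):

    # systemd and the kubelet inside the container need broad privileges.
    # /tmp and /run are fresh tmpfs mounts, as systemd expects; /lib/modules
    # comes read-only from the host, and /var from the volume preloaded above.
    # The container gets a static IP on its dedicated bridge network, and its
    # API/SSH ports are published on loopback with random host ports (the ::
    # syntax) so parallel clusters do not collide.
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --network ingress-addon-legacy-036387 --ip 192.168.49.2 \
      --volume ingress-addon-legacy-036387:/var \
      --memory=4096mb --cpus=2 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      "$KICBASE_IMAGE"
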
	I1212 22:12:28.210238   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Running}}
	I1212 22:12:28.227423   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Status}}
	I1212 22:12:28.245513   55774 cli_runner.go:164] Run: docker exec ingress-addon-legacy-036387 stat /var/lib/dpkg/alternatives/iptables
	I1212 22:12:28.299042   55774 oci.go:144] the created container "ingress-addon-legacy-036387" has a running status.
	I1212 22:12:28.299076   55774 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa...
	I1212 22:12:28.395321   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 22:12:28.395382   55774 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 22:12:28.414271   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Status}}
	I1212 22:12:28.430050   55774 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 22:12:28.430079   55774 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-036387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 22:12:28.493595   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Status}}
	I1212 22:12:28.512793   55774 machine.go:88] provisioning docker machine ...
	I1212 22:12:28.512825   55774 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-036387"
	I1212 22:12:28.512888   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:28.532293   55774 main.go:141] libmachine: Using SSH client type: native
	I1212 22:12:28.532745   55774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1212 22:12:28.532765   55774 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-036387 && echo "ingress-addon-legacy-036387" | sudo tee /etc/hostname
	I1212 22:12:28.533415   55774 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53130->127.0.0.1:32787: read: connection reset by peer
	I1212 22:12:31.665304   55774 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-036387
	
	I1212 22:12:31.665384   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:31.681267   55774 main.go:141] libmachine: Using SSH client type: native
	I1212 22:12:31.681598   55774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1212 22:12:31.681627   55774 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-036387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-036387/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-036387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:12:31.799268   55774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:12:31.799298   55774 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:12:31.799343   55774 ubuntu.go:177] setting up certificates
	I1212 22:12:31.799360   55774 provision.go:83] configureAuth start
	I1212 22:12:31.799418   55774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-036387
	I1212 22:12:31.815082   55774 provision.go:138] copyHostCerts
	I1212 22:12:31.815125   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:12:31.815160   55774 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem, removing ...
	I1212 22:12:31.815170   55774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:12:31.815255   55774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:12:31.815371   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:12:31.815409   55774 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem, removing ...
	I1212 22:12:31.815420   55774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:12:31.815464   55774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:12:31.815540   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:12:31.815583   55774 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem, removing ...
	I1212 22:12:31.815595   55774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:12:31.815638   55774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:12:31.815717   55774 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-036387 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-036387]
	I1212 22:12:31.930589   55774 provision.go:172] copyRemoteCerts
	I1212 22:12:31.930644   55774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:12:31.930679   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:31.946241   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:12:32.035324   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:12:32.035401   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:12:32.055896   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:12:32.055955   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:12:32.075742   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:12:32.075788   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1212 22:12:32.095493   55774 provision.go:86] duration metric: configureAuth took 296.120243ms
	I1212 22:12:32.095520   55774 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:12:32.095728   55774 config.go:182] Loaded profile config "ingress-addon-legacy-036387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 22:12:32.095814   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:32.111855   55774 main.go:141] libmachine: Using SSH client type: native
	I1212 22:12:32.112290   55774 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1212 22:12:32.112317   55774 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:12:32.335683   55774 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:12:32.335706   55774 machine.go:91] provisioned docker machine in 3.822894176s
	I1212 22:12:32.335717   55774 client.go:171] LocalClient.Create took 11.586739125s
	I1212 22:12:32.335733   55774 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-036387" took 11.586795279s
	I1212 22:12:32.335739   55774 start.go:300] post-start starting for "ingress-addon-legacy-036387" (driver="docker")
	I1212 22:12:32.335748   55774 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:12:32.335804   55774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:12:32.335855   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:32.351734   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:12:32.444351   55774 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:12:32.447284   55774 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:12:32.447322   55774 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:12:32.447336   55774 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:12:32.447344   55774 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 22:12:32.447357   55774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:12:32.447414   55774 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:12:32.447503   55774 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> 163992.pem in /etc/ssl/certs
	I1212 22:12:32.447513   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> /etc/ssl/certs/163992.pem
	I1212 22:12:32.447640   55774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:12:32.454831   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:12:32.474980   55774 start.go:303] post-start completed in 139.230565ms
	I1212 22:12:32.475282   55774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-036387
	I1212 22:12:32.491342   55774 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/config.json ...
	I1212 22:12:32.491607   55774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:12:32.491658   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:32.506589   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:12:32.592037   55774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:12:32.595796   55774 start.go:128] duration metric: createHost completed in 11.849528428s
	I1212 22:12:32.595818   55774 start.go:83] releasing machines lock for "ingress-addon-legacy-036387", held for 11.849663981s
	I1212 22:12:32.595883   55774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-036387
	I1212 22:12:32.612501   55774 ssh_runner.go:195] Run: cat /version.json
	I1212 22:12:32.612531   55774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:12:32.612553   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:32.612582   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:12:32.630705   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:12:32.631306   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:12:32.803913   55774 ssh_runner.go:195] Run: systemctl --version
	I1212 22:12:32.807846   55774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:12:32.944497   55774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:12:32.948552   55774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:12:32.965388   55774 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:12:32.965457   55774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:12:32.990291   55774 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1212 22:12:32.990312   55774 start.go:475] detecting cgroup driver to use...
	I1212 22:12:32.990345   55774 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:12:32.990400   55774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:12:33.003428   55774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:12:33.012926   55774 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:12:33.012986   55774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:12:33.024778   55774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:12:33.036867   55774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:12:33.108015   55774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:12:33.194569   55774 docker.go:219] disabling docker service ...
	I1212 22:12:33.194626   55774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:12:33.211150   55774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:12:33.221029   55774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:12:33.293448   55774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:12:33.367349   55774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
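
Note the order used to disable docker (and cri-docker before it): a socket-activated service comes back to life on the next connection if only the service unit is stopped, so the socket is stopped and disabled first, then the service is masked. The same recipe in isolation:

    # Fully neutralize a socket-activated systemd service (docker shown).
    sudo systemctl stop -f docker.socket     # stop the activation socket first
    sudo systemctl stop -f docker.service    # then the running daemon
    sudo systemctl disable docker.socket     # keep it from returning on boot
    sudo systemctl mask docker.service       # block manual or aliased starts
    sudo systemctl is-active --quiet docker || echo "docker is down"
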
	I1212 22:12:33.377024   55774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:12:33.390415   55774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 22:12:33.390462   55774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:12:33.398875   55774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:12:33.398933   55774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:12:33.407527   55774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:12:33.415659   55774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:12:33.423831   55774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:12:33.431258   55774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:12:33.437952   55774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:12:33.445025   55774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:12:33.514703   55774 ssh_runner.go:195] Run: sudo systemctl restart crio
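
The CRI-O preparation above is all in-place sed surgery on the drop-in config, followed by a single restart. Condensed, with paths and values exactly as logged:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image that kubeadm expects for this Kubernetes version.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    # Match the cgroupfs driver detected on the host; run conmon in the pod cgroup.
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Kernel prerequisite for pod networking, then restart the runtime.
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
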
	I1212 22:12:33.616299   55774 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:12:33.616361   55774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:12:33.619541   55774 start.go:543] Will wait 60s for crictl version
	I1212 22:12:33.619614   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:33.622352   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:12:33.652690   55774 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 22:12:33.652767   55774 ssh_runner.go:195] Run: crio --version
	I1212 22:12:33.684686   55774 ssh_runner.go:195] Run: crio --version
	I1212 22:12:33.718661   55774 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1212 22:12:33.720136   55774 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-036387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:12:33.736320   55774 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 22:12:33.739874   55774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
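
That one-liner is an idempotent /etc/hosts update: strip any stale entry, append the fresh one, and promote the temp file in a single cp. Unpacked:

    # Replace-or-add the host.minikube.internal entry without duplicates.
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any old line
      echo $'192.168.49.1\thost.minikube.internal'      # append the current one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp keeps the file's inode and permissions
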
	I1212 22:12:33.749654   55774 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1212 22:12:33.749720   55774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:12:33.792304   55774 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 22:12:33.792356   55774 ssh_runner.go:195] Run: which lz4
	I1212 22:12:33.795612   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 22:12:33.795684   55774 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 22:12:33.798602   55774 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 22:12:33.798626   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1212 22:12:34.719982   55774 crio.go:444] Took 0.924319 seconds to copy over tarball
	I1212 22:12:34.720050   55774 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 22:12:36.948565   55774 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.228471094s)
	I1212 22:12:36.948596   55774 crio.go:451] Took 2.228586 seconds to extract the tarball
	I1212 22:12:36.948605   55774 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 22:12:37.016656   55774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:12:37.047266   55774 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1212 22:12:37.047296   55774 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 22:12:37.047382   55774 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:12:37.047406   55774 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1212 22:12:37.047425   55774 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:12:37.047380   55774 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:12:37.047408   55774 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1212 22:12:37.047382   55774 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:12:37.047427   55774 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1212 22:12:37.047411   55774 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:12:37.048559   55774 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:12:37.048579   55774 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1212 22:12:37.048632   55774 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:12:37.048638   55774 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:12:37.048638   55774 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:12:37.048559   55774 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:12:37.048657   55774 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 22:12:37.048566   55774 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1212 22:12:37.264360   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 22:12:37.266523   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:12:37.299357   55774 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 22:12:37.299401   55774 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 22:12:37.299444   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.303935   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:12:37.314989   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:12:37.340032   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:12:37.345757   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1212 22:12:37.357960   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1212 22:12:37.372864   55774 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:12:37.418470   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 22:12:37.418504   55774 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1212 22:12:37.418535   55774 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:12:37.418577   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.418621   55774 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1212 22:12:37.418666   55774 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:12:37.418700   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.418700   55774 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1212 22:12:37.418778   55774 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:12:37.418746   55774 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1212 22:12:37.418852   55774 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1212 22:12:37.418884   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.418807   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.434819   55774 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1212 22:12:37.434871   55774 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1212 22:12:37.434920   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.440296   55774 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1212 22:12:37.440323   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1212 22:12:37.440331   55774 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:12:37.440378   55774 ssh_runner.go:195] Run: which crictl
	I1212 22:12:37.453885   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 22:12:37.453975   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1212 22:12:37.453992   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1212 22:12:37.454023   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1212 22:12:37.454076   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1212 22:12:37.454077   55774 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1212 22:12:37.534160   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1212 22:12:37.620499   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1212 22:12:37.620588   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1212 22:12:37.621096   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1212 22:12:37.621310   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1212 22:12:37.621314   55774 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1212 22:12:37.621354   55774 cache_images.go:92] LoadImages completed in 574.044248ms
	W1212 22:12:37.621432   55774 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1212 22:12:37.621489   55774 ssh_runner.go:195] Run: crio config
	I1212 22:12:37.659868   55774 cni.go:84] Creating CNI manager for ""
	I1212 22:12:37.659887   55774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:12:37.659904   55774 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:12:37.659923   55774 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-036387 NodeName:ingress-addon-legacy-036387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 22:12:37.660048   55774 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-036387"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
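
All four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) travel in one multi-document YAML file, which kubeadm consumes in a single invocation. Later in the start sequence that looks roughly like this (flags illustrative):

    # kubeadm reads every document from the one multi-doc config file.
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
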
	
	I1212 22:12:37.660117   55774 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-036387 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-036387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:12:37.660160   55774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1212 22:12:37.667783   55774 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:12:37.667837   55774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:12:37.675029   55774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1212 22:12:37.689875   55774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1212 22:12:37.705006   55774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1212 22:12:37.720108   55774 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 22:12:37.723054   55774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:12:37.732401   55774 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387 for IP: 192.168.49.2
	I1212 22:12:37.732434   55774 certs.go:190] acquiring lock for shared ca certs: {Name:mkef1e7b14f91e4f04d1e9cbbafdc8c42ba43b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:37.732589   55774 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key
	I1212 22:12:37.732637   55774 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key
	I1212 22:12:37.732693   55774 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.key
	I1212 22:12:37.732715   55774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt with IP's: []
	I1212 22:12:37.824423   55774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt ...
	I1212 22:12:37.824453   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: {Name:mk11e3002eccecf6e4ca979673aaf97aa9eaa33a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:37.824639   55774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.key ...
	I1212 22:12:37.824658   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.key: {Name:mkf18902b85aae389ccca043be162b70a0a5218e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:37.824755   55774 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key.dd3b5fb2
	I1212 22:12:37.824778   55774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:12:37.920112   55774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt.dd3b5fb2 ...
	I1212 22:12:37.920140   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt.dd3b5fb2: {Name:mk82d5fa91f82fe9d9ab670ab44c7f4e07db4be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:37.920325   55774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key.dd3b5fb2 ...
	I1212 22:12:37.920350   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key.dd3b5fb2: {Name:mk3e25429ee43962844dae49e3f9442c7cb4c933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:37.920445   55774 certs.go:337] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt
	I1212 22:12:37.920527   55774 certs.go:341] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key
	I1212 22:12:37.920597   55774 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.key
	I1212 22:12:37.920617   55774 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.crt with IP's: []
	I1212 22:12:38.014990   55774 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.crt ...
	I1212 22:12:38.015027   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.crt: {Name:mk01d6d186fbd382fa5eea144e246543e4030f8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:12:38.015230   55774 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.key ...
	I1212 22:12:38.015254   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.key: {Name:mk9e876d0e445088ad2b1a6c191b2413250a6c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
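
minikube generates these certificates in Go (crypto.go), but the equivalent openssl steps make the chain explicit: a fresh key, a CSR, and a CA signature that embeds the SAN IP list logged above. A sketch with shortened file names:

    # Equivalent of "generating minikube signed cert ... with IP's: [...]".
    openssl genrsa -out apiserver.key 2048
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1') \
      -days 365 -out apiserver.crt
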
	I1212 22:12:38.015367   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 22:12:38.015410   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 22:12:38.015432   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 22:12:38.015453   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 22:12:38.015468   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:12:38.015496   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:12:38.015518   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:12:38.015544   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:12:38.015631   55774 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem (1338 bytes)
	W1212 22:12:38.015685   55774 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399_empty.pem, impossibly tiny 0 bytes
	I1212 22:12:38.015703   55774 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 22:12:38.015740   55774 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:12:38.015782   55774 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:12:38.015825   55774 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem (1675 bytes)
	I1212 22:12:38.015895   55774 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:12:38.015949   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem -> /usr/share/ca-certificates/16399.pem
	I1212 22:12:38.015972   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> /usr/share/ca-certificates/163992.pem
	I1212 22:12:38.015991   55774 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:12:38.016667   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:12:38.037247   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:12:38.056975   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:12:38.076503   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:12:38.096214   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:12:38.115914   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:12:38.135443   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:12:38.154857   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:12:38.174823   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem --> /usr/share/ca-certificates/16399.pem (1338 bytes)
	I1212 22:12:38.195304   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /usr/share/ca-certificates/163992.pem (1708 bytes)
	I1212 22:12:38.215705   55774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:12:38.235882   55774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:12:38.250720   55774 ssh_runner.go:195] Run: openssl version
	I1212 22:12:38.255583   55774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163992.pem && ln -fs /usr/share/ca-certificates/163992.pem /etc/ssl/certs/163992.pem"
	I1212 22:12:38.263491   55774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163992.pem
	I1212 22:12:38.266349   55774 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:08 /usr/share/ca-certificates/163992.pem
	I1212 22:12:38.266401   55774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163992.pem
	I1212 22:12:38.272357   55774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163992.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 22:12:38.280159   55774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:12:38.287922   55774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:12:38.290703   55774 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:12:38.290752   55774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:12:38.296485   55774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:12:38.303875   55774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16399.pem && ln -fs /usr/share/ca-certificates/16399.pem /etc/ssl/certs/16399.pem"
	I1212 22:12:38.311387   55774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16399.pem
	I1212 22:12:38.314213   55774 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:08 /usr/share/ca-certificates/16399.pem
	I1212 22:12:38.314260   55774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16399.pem
	I1212 22:12:38.320062   55774 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16399.pem /etc/ssl/certs/51391683.0"
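
For context: the ln -fs commands above implement OpenSSL's hashed-directory convention, in which each CA under /etc/ssl/certs is reachable through a symlink named after its subject-name hash. A minimal sketch of the same operation (illustrative, not minikube's actual code):

	# Compute the subject-name hash and create the <hash>.0 symlink OpenSSL looks up.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 above
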
	I1212 22:12:38.327620   55774 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:12:38.330334   55774 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:12:38.330383   55774 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-036387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-036387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:12:38.330469   55774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:12:38.330509   55774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:12:38.362603   55774 cri.go:89] found id: ""
	I1212 22:12:38.362662   55774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:12:38.370434   55774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:12:38.377668   55774 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 22:12:38.377729   55774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:12:38.384971   55774 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:12:38.385006   55774 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 22:12:38.425154   55774 kubeadm.go:322] W1212 22:12:38.424616    1383 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1212 22:12:38.463362   55774 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1212 22:12:38.530704   55774 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:12:40.922007   55774 kubeadm.go:322] W1212 22:12:40.921710    1383 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 22:12:40.922944   55774 kubeadm.go:322] W1212 22:12:40.922737    1383 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1212 22:12:49.878345   55774 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1212 22:12:49.878393   55774 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:12:49.878462   55774 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 22:12:49.878512   55774 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1212 22:12:49.878583   55774 kubeadm.go:322] OS: Linux
	I1212 22:12:49.878666   55774 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 22:12:49.878714   55774 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 22:12:49.878757   55774 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 22:12:49.878798   55774 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 22:12:49.878838   55774 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 22:12:49.878880   55774 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 22:12:49.878938   55774 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:12:49.879078   55774 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:12:49.879190   55774 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 22:12:49.879321   55774 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:12:49.879413   55774 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:12:49.879473   55774 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:12:49.879594   55774 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:12:49.881169   55774 out.go:204]   - Generating certificates and keys ...
	I1212 22:12:49.881257   55774 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:12:49.881340   55774 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:12:49.881442   55774 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:12:49.881515   55774 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:12:49.881581   55774 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:12:49.881623   55774 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:12:49.881667   55774 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:12:49.881778   55774 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-036387 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 22:12:49.881828   55774 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:12:49.881929   55774 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-036387 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 22:12:49.881981   55774 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:12:49.882032   55774 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:12:49.882079   55774 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:12:49.882122   55774 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:12:49.882181   55774 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:12:49.882229   55774 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:12:49.882287   55774 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:12:49.882334   55774 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:12:49.882412   55774 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:12:49.883839   55774 out.go:204]   - Booting up control plane ...
	I1212 22:12:49.883919   55774 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:12:49.883986   55774 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:12:49.884046   55774 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:12:49.884114   55774 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:12:49.884235   55774 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:12:49.884306   55774 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.501878 seconds
	I1212 22:12:49.884392   55774 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:12:49.884505   55774 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:12:49.884554   55774 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:12:49.884677   55774 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-036387 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1212 22:12:49.884750   55774 kubeadm.go:322] [bootstrap-token] Using token: uyjxuu.rn0r7xqyi4ltmpgu
	I1212 22:12:49.886129   55774 out.go:204]   - Configuring RBAC rules ...
	I1212 22:12:49.886248   55774 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:12:49.886317   55774 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:12:49.886443   55774 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:12:49.886562   55774 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:12:49.886654   55774 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:12:49.886733   55774 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:12:49.886834   55774 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:12:49.886886   55774 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:12:49.886927   55774 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:12:49.886933   55774 kubeadm.go:322] 
	I1212 22:12:49.886979   55774 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:12:49.886985   55774 kubeadm.go:322] 
	I1212 22:12:49.887057   55774 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:12:49.887065   55774 kubeadm.go:322] 
	I1212 22:12:49.887086   55774 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:12:49.887137   55774 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:12:49.887186   55774 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:12:49.887201   55774 kubeadm.go:322] 
	I1212 22:12:49.887245   55774 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:12:49.887312   55774 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:12:49.887385   55774 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:12:49.887391   55774 kubeadm.go:322] 
	I1212 22:12:49.887466   55774 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:12:49.887531   55774 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:12:49.887537   55774 kubeadm.go:322] 
	I1212 22:12:49.887645   55774 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uyjxuu.rn0r7xqyi4ltmpgu \
	I1212 22:12:49.887772   55774 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f \
	I1212 22:12:49.887812   55774 kubeadm.go:322]     --control-plane 
	I1212 22:12:49.887821   55774 kubeadm.go:322] 
	I1212 22:12:49.887906   55774 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:12:49.887915   55774 kubeadm.go:322] 
	I1212 22:12:49.887982   55774 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uyjxuu.rn0r7xqyi4ltmpgu \
	I1212 22:12:49.888089   55774 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f 
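
For context: the --discovery-token-ca-cert-hash values printed above are the SHA-256 digest of the cluster CA's DER-encoded public key. The hash can be recomputed from the CA certificate with the standard recipe from the kubeadm documentation (sketch, using the certificate directory shown above):

	# Re-derive the discovery hash from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
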
	I1212 22:12:49.888100   55774 cni.go:84] Creating CNI manager for ""
	I1212 22:12:49.888105   55774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:12:49.889427   55774 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 22:12:49.890691   55774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:12:49.894710   55774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1212 22:12:49.894727   55774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:12:49.910218   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
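
For context: with the docker driver and the crio runtime, cni.go above selects kindnet and applies its manifest with the bundled kubectl. Two quick, illustrative checks that the CNI landed (the app=kindnet label is an assumption based on the stock kindnet manifest):

	ls /etc/cni/net.d/   # kindnet writes its CNI config here
	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet
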
	I1212 22:12:50.327399   55774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:12:50.327485   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:50.327485   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=ingress-addon-legacy-036387 minikube.k8s.io/updated_at=2023_12_12T22_12_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:50.430716   55774 ops.go:34] apiserver oom_adj: -16
	I1212 22:12:50.430728   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:50.495361   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:51.074587   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:51.574185   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:52.074279   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:52.574149   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:53.074297   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:53.574221   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:54.074749   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:54.574280   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:55.074793   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:55.574926   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:56.074850   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:56.574399   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:57.074780   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:57.574281   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:58.074096   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:58.574795   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:59.074844   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:12:59.574808   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:00.074415   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:00.574841   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:01.074569   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:01.574671   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:02.074897   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:02.573989   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:03.074132   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:03.574612   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:04.074340   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:04.574258   55774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:13:04.637901   55774 kubeadm.go:1088] duration metric: took 14.310474985s to wait for elevateKubeSystemPrivileges.
	I1212 22:13:04.637938   55774 kubeadm.go:406] StartCluster complete in 26.307559907s
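
For context: the burst of identical "get sa default" calls above is elevateKubeSystemPrivileges polling until the ServiceAccount controller has created the default ServiceAccount (pods cannot be admitted in a namespace until it exists). Roughly equivalent shell (sketch; the 0.5s interval is illustrative):

	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
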
	I1212 22:13:04.637960   55774 settings.go:142] acquiring lock: {Name:mk857225ea2f0544984670c00dbb01f431ce59c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:13:04.638015   55774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:13:04.638687   55774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/kubeconfig: {Name:mkd3e8de36f0003ff040c445ac6e47a46685daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:13:04.638889   55774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:13:04.638906   55774 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 22:13:04.638972   55774 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-036387"
	I1212 22:13:04.639000   55774 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-036387"
	I1212 22:13:04.638972   55774 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-036387"
	I1212 22:13:04.639096   55774 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-036387"
	I1212 22:13:04.639106   55774 config.go:182] Loaded profile config "ingress-addon-legacy-036387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1212 22:13:04.639166   55774 host.go:66] Checking if "ingress-addon-legacy-036387" exists ...
	I1212 22:13:04.639416   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Status}}
	I1212 22:13:04.639537   55774 kapi.go:59] client config for ingress-addon-legacy-036387: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:13:04.639651   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Status}}
	I1212 22:13:04.640357   55774 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 22:13:04.658315   55774 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-036387" context rescaled to 1 replica
	I1212 22:13:04.658346   55774 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:13:04.660312   55774 out.go:177] * Verifying Kubernetes components...
	I1212 22:13:04.658841   55774 kapi.go:59] client config for ingress-addon-legacy-036387: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:13:04.662180   55774 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:13:04.660567   55774 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-036387"
	I1212 22:13:04.662133   55774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:13:04.663737   55774 host.go:66] Checking if "ingress-addon-legacy-036387" exists ...
	I1212 22:13:04.663795   55774 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:13:04.663811   55774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:13:04.663862   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:13:04.664283   55774 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-036387 --format={{.State.Status}}
	I1212 22:13:04.683832   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:13:04.684080   55774 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:13:04.684096   55774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:13:04.684144   55774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-036387
	I1212 22:13:04.700780   55774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/ingress-addon-legacy-036387/id_rsa Username:docker}
	I1212 22:13:04.729288   55774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
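
For context: the sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway; the Corefile fragment it injects is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
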
	I1212 22:13:04.729776   55774 kapi.go:59] client config for ingress-addon-legacy-036387: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:13:04.730115   55774 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-036387" to be "Ready" ...
	I1212 22:13:04.822849   55774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:13:04.835786   55774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:13:05.230688   55774 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 22:13:05.365574   55774 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 22:13:05.366875   55774 addons.go:502] enable addons completed in 727.968709ms: enabled=[storage-provisioner default-storageclass]
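
For context: the two addons enabled above materialize as the storage-provisioner pod in kube-system plus a default StorageClass (named "standard" in stock minikube). Illustrative verification with the bundled kubectl:

	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get storageclass
	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pod storage-provisioner
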
	I1212 22:13:06.738712   55774 node_ready.go:58] node "ingress-addon-legacy-036387" has status "Ready":"False"
	I1212 22:13:09.238531   55774 node_ready.go:58] node "ingress-addon-legacy-036387" has status "Ready":"False"
	I1212 22:13:10.412556   55774 node_ready.go:49] node "ingress-addon-legacy-036387" has status "Ready":"True"
	I1212 22:13:10.412590   55774 node_ready.go:38] duration metric: took 5.682446926s waiting for node "ingress-addon-legacy-036387" to be "Ready" ...
	I1212 22:13:10.412602   55774 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1212 22:13:10.660611   55774 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:12.890600   55774 pod_ready.go:102] pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-12 22:13:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1212 22:13:14.892617   55774 pod_ready.go:102] pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace has status "Ready":"False"
	I1212 22:13:16.911693   55774 pod_ready.go:102] pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace has status "Ready":"False"
	I1212 22:13:19.393161   55774 pod_ready.go:102] pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace has status "Ready":"False"
	I1212 22:13:20.392189   55774 pod_ready.go:92] pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace has status "Ready":"True"
	I1212 22:13:20.392217   55774 pod_ready.go:81] duration metric: took 9.731573714s waiting for pod "coredns-66bff467f8-mgc6x" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.392225   55774 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.397358   55774 pod_ready.go:92] pod "etcd-ingress-addon-legacy-036387" in "kube-system" namespace has status "Ready":"True"
	I1212 22:13:20.397376   55774 pod_ready.go:81] duration metric: took 5.145311ms waiting for pod "etcd-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.397386   55774 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.401127   55774 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-036387" in "kube-system" namespace has status "Ready":"True"
	I1212 22:13:20.401147   55774 pod_ready.go:81] duration metric: took 3.753488ms waiting for pod "kube-apiserver-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.401158   55774 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.404862   55774 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-036387" in "kube-system" namespace has status "Ready":"True"
	I1212 22:13:20.404879   55774 pod_ready.go:81] duration metric: took 3.713754ms waiting for pod "kube-controller-manager-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.404887   55774 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6rx7" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.408375   55774 pod_ready.go:92] pod "kube-proxy-r6rx7" in "kube-system" namespace has status "Ready":"True"
	I1212 22:13:20.408397   55774 pod_ready.go:81] duration metric: took 3.500175ms waiting for pod "kube-proxy-r6rx7" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.408405   55774 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.588800   55774 request.go:629] Waited for 180.332355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-036387
	I1212 22:13:20.788447   55774 request.go:629] Waited for 197.034284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-036387
	I1212 22:13:20.791092   55774 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-036387" in "kube-system" namespace has status "Ready":"True"
	I1212 22:13:20.791116   55774 pod_ready.go:81] duration metric: took 382.704038ms waiting for pod "kube-scheduler-ingress-addon-legacy-036387" in "kube-system" namespace to be "Ready" ...
	I1212 22:13:20.791129   55774 pod_ready.go:38] duration metric: took 10.378504787s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
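
For context: the pod_ready polling above checks each system-critical pod's Ready condition in turn; kubectl can express the same wait declaratively. An illustrative equivalent for the kube-dns pods only:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
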
	I1212 22:13:20.791146   55774 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:13:20.791205   55774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:13:20.801563   55774 api_server.go:72] duration metric: took 16.143192393s to wait for apiserver process to appear ...
	I1212 22:13:20.801590   55774 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:13:20.801606   55774 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 22:13:20.806096   55774 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
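
For context: the healthz probe above can be reproduced by hand; in this Kubernetes version the endpoint is readable anonymously and returns a plain "ok" body on success (sketch; -k skips TLS verification against the cluster CA):

	curl -k https://192.168.49.2:8443/healthz
	# ok
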
	I1212 22:13:20.806887   55774 api_server.go:141] control plane version: v1.18.20
	I1212 22:13:20.806909   55774 api_server.go:131] duration metric: took 5.313217ms to wait for apiserver health ...
	I1212 22:13:20.806917   55774 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:13:20.988255   55774 request.go:629] Waited for 181.254915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:13:20.993285   55774 system_pods.go:59] 8 kube-system pods found
	I1212 22:13:20.993309   55774 system_pods.go:61] "coredns-66bff467f8-mgc6x" [087b9e8e-abce-4a7f-ae20-2c81eb6d71bd] Running
	I1212 22:13:20.993314   55774 system_pods.go:61] "etcd-ingress-addon-legacy-036387" [ec51032d-76bc-4ced-834b-931884ace670] Running
	I1212 22:13:20.993318   55774 system_pods.go:61] "kindnet-zk58q" [ebb30275-1bca-4b83-ba15-c4693bba1474] Running
	I1212 22:13:20.993322   55774 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-036387" [b6027b30-e94a-4a6c-a832-84d0d33a72f5] Running
	I1212 22:13:20.993331   55774 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-036387" [9c782866-5993-49d6-bf4e-29cf222c339a] Running
	I1212 22:13:20.993335   55774 system_pods.go:61] "kube-proxy-r6rx7" [858bee3e-21b8-4585-840c-89c9bb7603f0] Running
	I1212 22:13:20.993339   55774 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-036387" [cccc33e1-32a0-4256-9899-27a39c1b087a] Running
	I1212 22:13:20.993343   55774 system_pods.go:61] "storage-provisioner" [70c5b66f-8570-4dc5-82bc-be0e253ffdd1] Running
	I1212 22:13:20.993349   55774 system_pods.go:74] duration metric: took 186.427491ms to wait for pod list to return data ...
	I1212 22:13:20.993359   55774 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:13:21.188759   55774 request.go:629] Waited for 195.341652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:13:21.190936   55774 default_sa.go:45] found service account: "default"
	I1212 22:13:21.190958   55774 default_sa.go:55] duration metric: took 197.593752ms for default service account to be created ...
	I1212 22:13:21.190965   55774 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:13:21.388378   55774 request.go:629] Waited for 197.337664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:13:21.393581   55774 system_pods.go:86] 8 kube-system pods found
	I1212 22:13:21.393610   55774 system_pods.go:89] "coredns-66bff467f8-mgc6x" [087b9e8e-abce-4a7f-ae20-2c81eb6d71bd] Running
	I1212 22:13:21.393616   55774 system_pods.go:89] "etcd-ingress-addon-legacy-036387" [ec51032d-76bc-4ced-834b-931884ace670] Running
	I1212 22:13:21.393623   55774 system_pods.go:89] "kindnet-zk58q" [ebb30275-1bca-4b83-ba15-c4693bba1474] Running
	I1212 22:13:21.393631   55774 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-036387" [b6027b30-e94a-4a6c-a832-84d0d33a72f5] Running
	I1212 22:13:21.393638   55774 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-036387" [9c782866-5993-49d6-bf4e-29cf222c339a] Running
	I1212 22:13:21.393653   55774 system_pods.go:89] "kube-proxy-r6rx7" [858bee3e-21b8-4585-840c-89c9bb7603f0] Running
	I1212 22:13:21.393660   55774 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-036387" [cccc33e1-32a0-4256-9899-27a39c1b087a] Running
	I1212 22:13:21.393672   55774 system_pods.go:89] "storage-provisioner" [70c5b66f-8570-4dc5-82bc-be0e253ffdd1] Running
	I1212 22:13:21.393683   55774 system_pods.go:126] duration metric: took 202.713201ms to wait for k8s-apps to be running ...
	I1212 22:13:21.393693   55774 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:13:21.393744   55774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:13:21.404088   55774 system_svc.go:56] duration metric: took 10.387164ms (WaitForService) to wait for kubelet.
	I1212 22:13:21.404107   55774 kubeadm.go:581] duration metric: took 16.745741651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:13:21.404122   55774 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:13:21.588522   55774 request.go:629] Waited for 184.343542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1212 22:13:21.590975   55774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 22:13:21.591003   55774 node_conditions.go:123] node cpu capacity is 8
	I1212 22:13:21.591014   55774 node_conditions.go:105] duration metric: took 186.888037ms to run NodePressure ...
	I1212 22:13:21.591025   55774 start.go:228] waiting for startup goroutines ...
	I1212 22:13:21.591033   55774 start.go:233] waiting for cluster config update ...
	I1212 22:13:21.591050   55774 start.go:242] writing updated cluster config ...
	I1212 22:13:21.591381   55774 ssh_runner.go:195] Run: rm -f paused
	I1212 22:13:21.636072   55774 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1212 22:13:21.638257   55774 out.go:177] 
	W1212 22:13:21.639846   55774 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1212 22:13:21.641280   55774 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1212 22:13:21.642720   55774 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-036387" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 12 22:16:08 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:08.083989379Z" level=info msg="Started container" PID=4857 containerID=386cbcd0778dcdfe1adcf8ed0ce2c3330b511b0a7ba386825387217522871479 description=default/hello-world-app-5f5d8b66bb-774d9/hello-world-app id=d823524a-195d-4e6d-a093-9c1e288362be name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=27b90cd92b5724e2819458f4955427b921a70aee69505554228b5bee68945135
	Dec 12 22:16:16 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:16.034412222Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=6a4f28c6-1c59-4cb2-868c-9a6fd14e8862 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 12 22:16:22 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:22.035237653Z" level=info msg="Stopping pod sandbox: 5d23dc97a62717b1f780a8fe539fa1960073db57eafc836752d3aa98130137b4" id=21c3f2d4-29c0-4419-8836-9fcd52ac9529 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:22 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:22.036274961Z" level=info msg="Stopped pod sandbox: 5d23dc97a62717b1f780a8fe539fa1960073db57eafc836752d3aa98130137b4" id=21c3f2d4-29c0-4419-8836-9fcd52ac9529 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:22 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:22.484253210Z" level=info msg="Stopping pod sandbox: 5d23dc97a62717b1f780a8fe539fa1960073db57eafc836752d3aa98130137b4" id=8260e256-decf-4518-9efe-8fcc0dd2c9d0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:22 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:22.484302803Z" level=info msg="Stopped pod sandbox (already stopped): 5d23dc97a62717b1f780a8fe539fa1960073db57eafc836752d3aa98130137b4" id=8260e256-decf-4518-9efe-8fcc0dd2c9d0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:23 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:23.236422842Z" level=info msg="Stopping container: 48682a1c9f6a54e70926ca29e5e58fdc6299f16cd2cf671155ec6ea4c25e3bb6 (timeout: 2s)" id=6bc5c033-8d6c-469b-b9a6-fdeaaddf8856 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 12 22:16:23 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:23.238749271Z" level=info msg="Stopping container: 48682a1c9f6a54e70926ca29e5e58fdc6299f16cd2cf671155ec6ea4c25e3bb6 (timeout: 2s)" id=f514f674-2a62-431b-9ae5-05994003b750 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 12 22:16:24 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:24.034134016Z" level=info msg="Stopping pod sandbox: 5d23dc97a62717b1f780a8fe539fa1960073db57eafc836752d3aa98130137b4" id=79529bcb-45e3-42dd-9c95-dbb96044d5ee name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:24 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:24.034184137Z" level=info msg="Stopped pod sandbox (already stopped): 5d23dc97a62717b1f780a8fe539fa1960073db57eafc836752d3aa98130137b4" id=79529bcb-45e3-42dd-9c95-dbb96044d5ee name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.243506819Z" level=warning msg="Stopping container 48682a1c9f6a54e70926ca29e5e58fdc6299f16cd2cf671155ec6ea4c25e3bb6 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=6bc5c033-8d6c-469b-b9a6-fdeaaddf8856 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 12 22:16:25 ingress-addon-legacy-036387 conmon[3400]: conmon 48682a1c9f6a54e70926 <ninfo>: container 3412 exited with status 137
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.388087699Z" level=info msg="Stopped container 48682a1c9f6a54e70926ca29e5e58fdc6299f16cd2cf671155ec6ea4c25e3bb6: ingress-nginx/ingress-nginx-controller-7fcf777cb7-2fbw9/controller" id=f514f674-2a62-431b-9ae5-05994003b750 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.388114839Z" level=info msg="Stopped container 48682a1c9f6a54e70926ca29e5e58fdc6299f16cd2cf671155ec6ea4c25e3bb6: ingress-nginx/ingress-nginx-controller-7fcf777cb7-2fbw9/controller" id=6bc5c033-8d6c-469b-b9a6-fdeaaddf8856 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.388697067Z" level=info msg="Stopping pod sandbox: 89d0c7434a1594b8b7e1b24cd56fc2f299183c387dde2f055d5e3f96cef4ba28" id=80f80605-76be-4f3d-bf33-1e2fde063c4f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.388708194Z" level=info msg="Stopping pod sandbox: 89d0c7434a1594b8b7e1b24cd56fc2f299183c387dde2f055d5e3f96cef4ba28" id=492435f8-42a5-43e1-a3a1-f560e3d7e2c5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.391619716Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-TCP5NGXS4ZUAGDEW - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-NYTGDP5MI3FVJUIZ - [0:0]\n-X KUBE-HP-TCP5NGXS4ZUAGDEW\n-X KUBE-HP-NYTGDP5MI3FVJUIZ\nCOMMIT\n"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.392841343Z" level=info msg="Closing host port tcp:80"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.392875918Z" level=info msg="Closing host port tcp:443"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.393765934Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.393783767Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.393898844Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-2fbw9 Namespace:ingress-nginx ID:89d0c7434a1594b8b7e1b24cd56fc2f299183c387dde2f055d5e3f96cef4ba28 UID:7cc29650-3b8e-4e97-8a0c-ef2793e4eb09 NetNS:/var/run/netns/eaa85b7b-f68b-4246-8a11-c7233b2f8729 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.394010010Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-2fbw9 from CNI network \"kindnet\" (type=ptp)"
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.433021002Z" level=info msg="Stopped pod sandbox: 89d0c7434a1594b8b7e1b24cd56fc2f299183c387dde2f055d5e3f96cef4ba28" id=80f80605-76be-4f3d-bf33-1e2fde063c4f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 12 22:16:25 ingress-addon-legacy-036387 crio[962]: time="2023-12-12 22:16:25.433142518Z" level=info msg="Stopped pod sandbox (already stopped): 89d0c7434a1594b8b7e1b24cd56fc2f299183c387dde2f055d5e3f96cef4ba28" id=492435f8-42a5-43e1-a3a1-f560e3d7e2c5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
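
For context: the CRI-O entries above show the ingress-nginx controller ignoring its stop signal, being killed after the 2s timeout (conmon reports exit status 137, i.e. SIGKILL), and its sandbox torn down along with the hostPort 80/443 iptables rules. The same state can be inspected over CRI (illustrative):

	sudo crictl ps -a --name controller          # shows the Exited controller container
	sudo crictl pods --namespace ingress-nginx   # shows the stopped sandbox
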
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	386cbcd0778dc       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            22 seconds ago      Running             hello-world-app           0                   27b90cd92b572       hello-world-app-5f5d8b66bb-774d9
	4cd7eb32fa83f       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   cecbd316b4078       nginx
	48682a1c9f6a5       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   89d0c7434a159       ingress-nginx-controller-7fcf777cb7-2fbw9
	f360c2562d933       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   c210d8b254b91       ingress-nginx-admission-patch-xl7kp
	60e74fa620fae       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   149b1ccc7d606       ingress-nginx-admission-create-l27qp
	360029decf21f       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   c667509507766       coredns-66bff467f8-mgc6x
	cbe69592014e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   468c95536ef50       storage-provisioner
	f09024d3be4da       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   728ad2123b974       kindnet-zk58q
	2d18a59252051       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   ec70fc5d2de50       kube-proxy-r6rx7
	aa0a5b4f7f98b       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   e84fe845aa12a       kube-scheduler-ingress-addon-legacy-036387
	9941121e0d53d       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   fb63e998b64b1       etcd-ingress-addon-legacy-036387
	44638f21329fd       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   4e5beb6604b1c       kube-apiserver-ingress-addon-legacy-036387
	6992aa7788dda       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   b80e56e7aad38       kube-controller-manager-ingress-addon-legacy-036387
	
	* 
	* ==> coredns [360029decf21fcb5d1d134665ae57fbc75e184d080f6883a0eb3c914d4493f4d] <==
	* [INFO] 10.244.0.5:54752 - 8336 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008021833s
	[INFO] 10.244.0.5:46789 - 58060 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004208073s
	[INFO] 10.244.0.5:54752 - 25592 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004180854s
	[INFO] 10.244.0.5:59442 - 63690 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004513733s
	[INFO] 10.244.0.5:44924 - 27631 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004617656s
	[INFO] 10.244.0.5:60678 - 60977 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004401325s
	[INFO] 10.244.0.5:43581 - 53850 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004552415s
	[INFO] 10.244.0.5:60930 - 39106 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004694716s
	[INFO] 10.244.0.5:53235 - 18751 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00457103s
	[INFO] 10.244.0.5:60678 - 57107 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005188973s
	[INFO] 10.244.0.5:54752 - 64199 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00505628s
	[INFO] 10.244.0.5:59442 - 53607 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005405631s
	[INFO] 10.244.0.5:44924 - 11411 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005329595s
	[INFO] 10.244.0.5:46789 - 3901 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005635185s
	[INFO] 10.244.0.5:43581 - 26539 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005332873s
	[INFO] 10.244.0.5:53235 - 3975 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005221168s
	[INFO] 10.244.0.5:54752 - 45723 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088697s
	[INFO] 10.244.0.5:60930 - 45899 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005381558s
	[INFO] 10.244.0.5:59442 - 42181 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060581s
	[INFO] 10.244.0.5:46789 - 30998 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048129s
	[INFO] 10.244.0.5:53235 - 30707 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038706s
	[INFO] 10.244.0.5:60678 - 41104 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00025532s
	[INFO] 10.244.0.5:43581 - 55436 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00017173s
	[INFO] 10.244.0.5:44924 - 41905 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00029341s
	[INFO] 10.244.0.5:60930 - 12962 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000263472s
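	
	The NXDOMAIN/NOERROR churn above is ordinary Kubernetes search-path expansion rather than a resolver fault: with the default ndots:5, a lookup of hello-world-app.default.svc.cluster.local is first tried against every search suffix (including the GCE host suffixes c.k8s-minikube.internal and google.internal) before the absolute name answers NOERROR. A pod resolv.conf on a node like this would typically look roughly as follows (illustrative sketch, not captured from this cluster; 10.96.0.10 is minikube's usual kube-dns service IP):
	
	    nameserver 10.96.0.10
	    search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	    options ndots:5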
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-036387
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-036387
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=ingress-addon-legacy-036387
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_12_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-036387
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:16:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:16:20 +0000   Tue, 12 Dec 2023 22:12:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:16:20 +0000   Tue, 12 Dec 2023 22:12:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:16:20 +0000   Tue, 12 Dec 2023 22:12:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:16:20 +0000   Tue, 12 Dec 2023 22:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-036387
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 beb1d2ffb43443a6b0f50d1d4ad1d163
	  System UUID:                a6bc8057-daf5-43fc-be9c-31496377dce8
	  Boot ID:                    e32ab69d-45ad-4e0a-b786-ce498c8395cb
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-774d9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-mgc6x                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m26s
	  kube-system                 etcd-ingress-addon-legacy-036387                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kindnet-zk58q                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m25s
	  kube-system                 kube-apiserver-ingress-addon-legacy-036387             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-036387    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-proxy-r6rx7                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-scheduler-ingress-addon-legacy-036387             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m49s (x5 over 3m49s)  kubelet     Node ingress-addon-legacy-036387 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-036387 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x4 over 3m49s)  kubelet     Node ingress-addon-legacy-036387 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m40s                  kubelet     Node ingress-addon-legacy-036387 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m40s                  kubelet     Node ingress-addon-legacy-036387 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m40s                  kubelet     Node ingress-addon-legacy-036387 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m20s                  kubelet     Node ingress-addon-legacy-036387 status is now: NodeReady
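	
	The description above is standard kubectl output; it can be regenerated against a live cluster with the same context used elsewhere in this test:
	
	    kubectl --context ingress-addon-legacy-036387 describe node ingress-addon-legacy-036387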
	
	* 
	* ==> dmesg <==
	* [  +0.004932] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006583] FS-Cache: N-cookie d=000000005393a62b{9p.inode} n=00000000a5f612b8
	[  +0.008742] FS-Cache: N-key=[8] '89a00f0200000000'
	[  +0.280632] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006754] FS-Cache: O-cookie d=000000005393a62b{9p.inode} n=00000000462ec489
	[  +0.007361] FS-Cache: O-key=[8] '99a00f0200000000'
	[  +0.004933] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006595] FS-Cache: N-cookie d=000000005393a62b{9p.inode} n=000000004d1e5a71
	[  +0.008728] FS-Cache: N-key=[8] '99a00f0200000000'
	[Dec12 22:12] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 22:13] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[  +1.020215] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[  +2.015789] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[Dec12 22:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[  +8.191161] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[ +16.130321] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[Dec12 22:15] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
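	
	The "martian source 10.244.0.5 from 127.0.0.1" messages most likely come from the ingress test curling http://127.0.0.1/ on the node while the connection is DNAT'ed into the pod network: replies carry a pod-CIDR address on a loopback-originated flow, which the kernel logs as martian. If the noise were unwanted it could be muted with the standard sysctl (illustrative; not run here):
	
	    sysctl -w net.ipv4.conf.all.log_martians=0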
	
	* 
	* ==> etcd [9941121e0d53df5f4617c24767314786795641f038b6ec2225fcd84774617a03] <==
	* 2023-12-12 22:12:42.819577 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-12 22:12:42.819799 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-12 22:12:42.820437 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/12 22:12:43 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/12 22:12:43 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/12 22:12:43 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/12 22:12:43 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/12 22:12:43 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-12 22:12:43.448320 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-12 22:12:43.449256 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-12 22:12:43.449315 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-12 22:12:43.449365 I | etcdserver: published {Name:ingress-addon-legacy-036387 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-12 22:12:43.449390 I | embed: ready to serve client requests
	2023-12-12 22:12:43.449536 I | embed: ready to serve client requests
	2023-12-12 22:12:43.451124 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-12 22:12:43.451796 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-12 22:13:10.410129 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-mgc6x\" " with result "range_response_count:1 size:3753" took too long (195.654601ms) to execute
	2023-12-12 22:13:10.410260 W | etcdserver: request "header:<ID:8128025771825856211 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:370 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:2626 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >>" with result "size:16" took too long (126.583925ms) to execute
	2023-12-12 22:13:10.410387 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-mgc6x.17a0353a98eb85fc\" " with result "range_response_count:1 size:829" took too long (195.967462ms) to execute
	2023-12-12 22:13:10.410567 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-036387\" " with result "range_response_count:1 size:6390" took too long (174.023154ms) to execute
	2023-12-12 22:13:10.653858 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:37328" took too long (240.195746ms) to execute
	2023-12-12 22:13:10.653955 W | etcdserver: request "header:<ID:8128025771825856215 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.17a0353bdf486ac6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.17a0353bdf486ac6\" value_size:684 lease:8128025771825855723 >> failure:<>>" with result "size:16" took too long (108.835291ms) to execute
	2023-12-12 22:13:10.654080 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2695" took too long (109.57723ms) to execute
	2023-12-12 22:13:10.883768 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-mgc6x\" " with result "range_response_count:1 size:3753" took too long (222.255874ms) to execute
	2023-12-12 22:14:04.156627 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-ingress-dns-minikube.17a0354135206c32\" " with result "range_response_count:1 size:1207" took too long (119.981372ms) to execute
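	
	The recurring "took too long" warnings point at slow or contended backend I/O on the CI host rather than an etcd fault; each affected request still completed. Since etcd is serving metrics on 127.0.0.1:2381 (logged above), WAL fsync latency could be inspected on the node with something like:
	
	    curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds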
	
	* 
	* ==> kernel <==
	*  22:16:31 up 59 min,  0 users,  load average: 3.80, 1.60, 0.91
	Linux ingress-addon-legacy-036387 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f09024d3be4dae7ccb933a4bfdecb7fa85148a9bfec68e69a930db4babce478d] <==
	* I1212 22:14:28.785132       1 main.go:227] handling current node
	I1212 22:14:38.797237       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:14:38.797264       1 main.go:227] handling current node
	I1212 22:14:48.806547       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:14:48.806572       1 main.go:227] handling current node
	I1212 22:14:58.810112       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:14:58.810155       1 main.go:227] handling current node
	I1212 22:15:08.813509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:15:08.813541       1 main.go:227] handling current node
	I1212 22:15:18.817202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:15:18.817225       1 main.go:227] handling current node
	I1212 22:15:28.829149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:15:28.829173       1 main.go:227] handling current node
	I1212 22:15:38.832520       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:15:38.832543       1 main.go:227] handling current node
	I1212 22:15:48.842533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:15:48.842558       1 main.go:227] handling current node
	I1212 22:15:58.845639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:15:58.845664       1 main.go:227] handling current node
	I1212 22:16:08.849485       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:16:08.849512       1 main.go:227] handling current node
	I1212 22:16:18.858185       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:16:18.858214       1 main.go:227] handling current node
	I1212 22:16:28.862166       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1212 22:16:28.862189       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [44638f21329fd789a7edbdf5a3f1b27f144041afa6b8dadc5279b81b9f9e04db] <==
	* E1212 22:12:46.812839       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1212 22:12:46.915764       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1212 22:12:46.915816       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 22:12:46.915835       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 22:12:46.916089       1 cache.go:39] Caches are synced for autoregister controller
	I1212 22:12:46.916469       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1212 22:12:47.806783       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1212 22:12:47.806941       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 22:12:47.811576       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1212 22:12:47.814156       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1212 22:12:47.814168       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1212 22:12:48.156256       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 22:12:48.181931       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1212 22:12:48.256242       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1212 22:12:48.257035       1 controller.go:609] quota admission added evaluator for: endpoints
	I1212 22:12:48.260019       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 22:12:49.095918       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1212 22:12:49.694072       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1212 22:12:49.864008       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1212 22:12:50.017685       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 22:13:04.918876       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1212 22:13:05.022807       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1212 22:13:22.323323       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1212 22:13:45.523065       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1212 22:16:23.245080       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [6992aa7788dda47ef04dc89f4231f8b4e3c7729ece2407e15d251d85b5fa43e9] <==
	* I1212 22:13:05.032676       1 range_allocator.go:373] Set node ingress-addon-legacy-036387 PodCIDR to [10.244.0.0/24]
	I1212 22:13:05.037047       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"1b0e366c-a59a-4ce9-9498-8c0fa1ef186a", APIVersion:"apps/v1", ResourceVersion:"213", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-r6rx7
	I1212 22:13:05.116814       1 shared_informer.go:230] Caches are synced for attach detach 
	I1212 22:13:05.117724       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 22:13:05.126020       1 shared_informer.go:230] Caches are synced for resource quota 
	I1212 22:13:05.126013       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"1e1f0c40-f5b6-4b61-8ebd-3bcb92849ea6", APIVersion:"apps/v1", ResourceVersion:"234", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zk58q
	I1212 22:13:05.215751       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 22:13:05.215784       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 22:13:05.215758       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	E1212 22:13:05.225631       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"1e1f0c40-f5b6-4b61-8ebd-3bcb92849ea6", ResourceVersion:"234", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63838015970, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0015824a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0015824c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0015824e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0015825a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0015825c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001582600), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001582620)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001582660)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000881f40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a40028), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00027b110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000de838)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000a40070)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1212 22:13:05.227840       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E1212 22:13:05.238204       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I1212 22:13:05.453173       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1212 22:13:05.453227       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1212 22:13:15.016909       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1212 22:13:22.278290       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"510ca641-6145-4a8d-b2db-ca4ec085ff7e", APIVersion:"apps/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1212 22:13:22.320950       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"0a4213e9-747c-4374-a88d-b767338b75eb", APIVersion:"apps/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-2fbw9
	I1212 22:13:22.331671       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0419d76c-c241-489e-9f5f-90e9296d095a", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-l27qp
	I1212 22:13:22.341679       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f3c30449-a7a0-4a3e-82ec-2e59cccf197c", APIVersion:"batch/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-xl7kp
	I1212 22:13:25.156258       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"0419d76c-c241-489e-9f5f-90e9296d095a", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 22:13:26.159572       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f3c30449-a7a0-4a3e-82ec-2e59cccf197c", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1212 22:16:06.287997       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"d8c8ae31-2b22-4385-b1df-333816e4967d", APIVersion:"apps/v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1212 22:16:06.292271       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"52eae2d3-1d57-4ed3-9f20-7f335db050a7", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-774d9
	E1212 22:16:28.031299       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-l9j8j" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [2d18a59252051d229797406a0d051d060ea67046db722be072622a9d0d9e254b] <==
	* W1212 22:13:05.594841       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1212 22:13:05.601831       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1212 22:13:05.601861       1 server_others.go:186] Using iptables Proxier.
	I1212 22:13:05.602106       1 server.go:583] Version: v1.18.20
	I1212 22:13:05.602507       1 config.go:133] Starting endpoints config controller
	I1212 22:13:05.602579       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1212 22:13:05.602599       1 config.go:315] Starting service config controller
	I1212 22:13:05.602681       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1212 22:13:05.702808       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1212 22:13:05.702908       1 shared_informer.go:230] Caches are synced for service config 
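	
	The 'Unknown proxy mode "", assuming iptables proxy' warning is expected: minikube leaves the mode unset, so kube-proxy falls back to its iptables default. Pinning it explicitly would be a one-field change in the KubeProxyConfiguration (illustrative fragment, not the configuration used here):
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    mode: "iptables"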
	
	* 
	* ==> kube-scheduler [aa0a5b4f7f98b5610c73ca27ef7e9b6ca4518dadb9709628cae29ec8a9d39023] <==
	* W1212 22:12:46.835988       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 22:12:46.835995       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 22:12:46.916469       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 22:12:46.916498       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1212 22:12:46.919013       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1212 22:12:46.919150       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 22:12:46.919197       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 22:12:46.919151       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1212 22:12:46.921085       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:12:46.921454       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 22:12:46.921625       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:12:46.921759       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:12:46.921862       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:12:46.921958       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:12:46.922206       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 22:12:46.922213       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:12:46.922279       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:12:46.922317       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:12:46.922346       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 22:12:46.922422       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:12:47.917005       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:12:47.990755       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:12:47.994359       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 22:12:48.016262       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1212 22:12:50.319417       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
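	
	The burst of "forbidden" list errors at 22:12:46-48 is a familiar bootstrap race: the scheduler starts its informers before the API server has finished bootstrapping the system:kube-scheduler RBAC policy, and the errors stop once the bindings exist (the cache sync at 22:12:50 above confirms recovery). After startup, the permissions can be spot-checked with, for example:
	
	    kubectl auth can-i list pods --as=system:kube-scheduler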
	
	* 
	* ==> kubelet <==
	* Dec 12 22:15:46 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:15:46.034835    1867 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:15:46 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:15:46.034866    1867 pod_workers.go:191] Error syncing pod 342b7920-9a39-4a8a-adaf-a223e9e51f82 ("kube-ingress-dns-minikube_kube-system(342b7920-9a39-4a8a-adaf-a223e9e51f82)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 22:16:01 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:01.034732    1867 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:16:01 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:01.034769    1867 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:16:01 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:01.034813    1867 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:16:01 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:01.034840    1867 pod_workers.go:191] Error syncing pod 342b7920-9a39-4a8a-adaf-a223e9e51f82 ("kube-ingress-dns-minikube_kube-system(342b7920-9a39-4a8a-adaf-a223e9e51f82)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 22:16:06 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:06.297619    1867 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 12 22:16:06 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:06.469660    1867 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-gwfqr" (UniqueName: "kubernetes.io/secret/b8e63e64-658b-4340-9b6a-b0e52b2aec05-default-token-gwfqr") pod "hello-world-app-5f5d8b66bb-774d9" (UID: "b8e63e64-658b-4340-9b6a-b0e52b2aec05")
	Dec 12 22:16:06 ingress-addon-legacy-036387 kubelet[1867]: W1212 22:16:06.656458    1867 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/157c5d141cba6e7fb78be6e8f09427c80dcf732e55b4699aad74a122a8597888/crio-27b90cd92b5724e2819458f4955427b921a70aee69505554228b5bee68945135 WatchSource:0}: Error finding container 27b90cd92b5724e2819458f4955427b921a70aee69505554228b5bee68945135: Status 404 returned error &{%!s(*http.body=&{0xc000c79920 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Dec 12 22:16:16 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:16.034773    1867 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:16:16 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:16.034820    1867 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:16:16 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:16.034876    1867 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 12 22:16:16 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:16.034925    1867 pod_workers.go:191] Error syncing pod 342b7920-9a39-4a8a-adaf-a223e9e51f82 ("kube-ingress-dns-minikube_kube-system(342b7920-9a39-4a8a-adaf-a223e9e51f82)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 12 22:16:22 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:22.148549    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-bswlr" (UniqueName: "kubernetes.io/secret/342b7920-9a39-4a8a-adaf-a223e9e51f82-minikube-ingress-dns-token-bswlr") pod "342b7920-9a39-4a8a-adaf-a223e9e51f82" (UID: "342b7920-9a39-4a8a-adaf-a223e9e51f82")
	Dec 12 22:16:22 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:22.150868    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342b7920-9a39-4a8a-adaf-a223e9e51f82-minikube-ingress-dns-token-bswlr" (OuterVolumeSpecName: "minikube-ingress-dns-token-bswlr") pod "342b7920-9a39-4a8a-adaf-a223e9e51f82" (UID: "342b7920-9a39-4a8a-adaf-a223e9e51f82"). InnerVolumeSpecName "minikube-ingress-dns-token-bswlr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:16:22 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:22.248846    1867 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-bswlr" (UniqueName: "kubernetes.io/secret/342b7920-9a39-4a8a-adaf-a223e9e51f82-minikube-ingress-dns-token-bswlr") on node "ingress-addon-legacy-036387" DevicePath ""
	Dec 12 22:16:23 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:23.237352    1867 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2fbw9.17a03568c485a693", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2fbw9", UID:"7cc29650-3b8e-4e97-8a0c-ef2793e4eb09", APIVersion:"v1", ResourceVersion:"460", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-036387"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15654cdce114093, ext:213574792241, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15654cdce114093, ext:213574792241, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2fbw9.17a03568c485a693" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 22:16:23 ingress-addon-legacy-036387 kubelet[1867]: E1212 22:16:23.241284    1867 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2fbw9.17a03568c485a693", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2fbw9", UID:"7cc29650-3b8e-4e97-8a0c-ef2793e4eb09", APIVersion:"v1", ResourceVersion:"460", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-036387"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15654cdce114093, ext:213574792241, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15654cdce372de0, ext:213577277820, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2fbw9.17a03568c485a693" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 12 22:16:25 ingress-addon-legacy-036387 kubelet[1867]: W1212 22:16:25.478941    1867 pod_container_deletor.go:77] Container "89d0c7434a1594b8b7e1b24cd56fc2f299183c387dde2f055d5e3f96cef4ba28" not found in pod's containers
	Dec 12 22:16:26 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:26.157221    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7cc29650-3b8e-4e97-8a0c-ef2793e4eb09-webhook-cert") pod "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09" (UID: "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09")
	Dec 12 22:16:26 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:26.157270    1867 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-stdk5" (UniqueName: "kubernetes.io/secret/7cc29650-3b8e-4e97-8a0c-ef2793e4eb09-ingress-nginx-token-stdk5") pod "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09" (UID: "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09")
	Dec 12 22:16:26 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:26.159128    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc29650-3b8e-4e97-8a0c-ef2793e4eb09-ingress-nginx-token-stdk5" (OuterVolumeSpecName: "ingress-nginx-token-stdk5") pod "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09" (UID: "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09"). InnerVolumeSpecName "ingress-nginx-token-stdk5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:16:26 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:26.159365    1867 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc29650-3b8e-4e97-8a0c-ef2793e4eb09-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09" (UID: "7cc29650-3b8e-4e97-8a0c-ef2793e4eb09"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 12 22:16:26 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:26.257548    1867 reconciler.go:319] Volume detached for volume "ingress-nginx-token-stdk5" (UniqueName: "kubernetes.io/secret/7cc29650-3b8e-4e97-8a0c-ef2793e4eb09-ingress-nginx-token-stdk5") on node "ingress-addon-legacy-036387" DevicePath ""
	Dec 12 22:16:26 ingress-addon-legacy-036387 kubelet[1867]: I1212 22:16:26.257586    1867 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7cc29650-3b8e-4e97-8a0c-ef2793e4eb09-webhook-cert") on node "ingress-addon-legacy-036387" DevicePath ""
	
	* 
	* ==> storage-provisioner [cbe69592014e1da553dfee13c2534757d694752bc8c22a262d45b8a200d6320b] <==
	* I1212 22:13:11.588458       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 22:13:11.595898       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 22:13:11.595963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 22:13:11.629236       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 22:13:11.629435       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-036387_560c8aec-c0a0-4a5f-a2f9-820f745ac89c!
	I1212 22:13:11.629376       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4036cfdc-ae84-4fa7-9168-74800d9fd3b2", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-036387_560c8aec-c0a0-4a5f-a2f9-820f745ac89c became leader
	I1212 22:13:11.729958       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-036387_560c8aec-c0a0-4a5f-a2f9-820f745ac89c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-036387 -n ingress-addon-legacy-036387
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-036387 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (178.68s)
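Note: the kubelet "Server rejected event ... is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated" errors in the log excerpt above are teardown noise rather than the failure itself: the kubelet keeps posting Killing events for the controller pod after the ingress-nginx namespace has entered deletion, so the API server refuses to persist them. Were a namespace genuinely stuck, a check along these lines would show it (an illustrative sketch using standard kubectl flags, not commands from the test run):

	kubectl --context ingress-addon-legacy-036387 get namespace ingress-nginx -o jsonpath='{.status.phase}'
	# prints "Terminating" while deletion is still in progress
	kubectl --context ingress-addon-legacy-036387 get namespace ingress-nginx -o jsonpath='{.spec.finalizers}'
	# finalizers listed here must clear before the namespace disappears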

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- sh -c "ping -c 1 192.168.58.1": exit status 1 (183.435815ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-67rxw): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-bbwmj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-bbwmj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-bbwmj -- sh -c "ping -c 1 192.168.58.1": exit status 1 (171.580961ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-bbwmj): exit status 1
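Note: busybox's ping needs either CAP_NET_RAW (to open a raw ICMP socket) or a net.ipv4.ping_group_range that covers its GID (to use an unprivileged ICMP echo socket), and CRI-O, unlike Docker, does not include NET_RAW in its default capability set, so "ping: permission denied (are you root?)" is the characteristic symptom on this runtime. Two possible mitigations, sketched here as illustrations rather than taken from the test suite (both patch the busybox deployment the test created):

	# Allow unprivileged ICMP echo sockets for every GID in the pod's
	# network namespace via a pod-level safe sysctl:
	kubectl --context multinode-764961 patch deployment busybox --type=json \
		-p='[{"op":"add","path":"/spec/template/spec/securityContext","value":{"sysctls":[{"name":"net.ipv4.ping_group_range","value":"0 2147483647"}]}}]'

	# Or grant the container CAP_NET_RAW so ping can open a raw ICMP socket:
	kubectl --context multinode-764961 patch deployment busybox --type=json \
		-p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["NET_RAW"]}}}]'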
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-764961
helpers_test.go:235: (dbg) docker inspect multinode-764961:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b",
	        "Created": "2023-12-12T22:21:22.240734019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102535,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:21:22.513182087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b1218b7fc47b3ed5d407fdcfdcbd5e6e1d94fe3ed762702de74973699a51be9",
	        "ResolvConfPath": "/var/lib/docker/containers/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/hosts",
	        "LogPath": "/var/lib/docker/containers/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b-json.log",
	        "Name": "/multinode-764961",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-764961:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-764961",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/801c0497a49e3f75c1c72fdb8349c182006887a9112c345f32631e5c6bdf92df-init/diff:/var/lib/docker/overlay2/315943c5fbce6bf5205163f366377908e1fa1e507321eff7fb62256fbf325087/diff",
	                "MergedDir": "/var/lib/docker/overlay2/801c0497a49e3f75c1c72fdb8349c182006887a9112c345f32631e5c6bdf92df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/801c0497a49e3f75c1c72fdb8349c182006887a9112c345f32631e5c6bdf92df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/801c0497a49e3f75c1c72fdb8349c182006887a9112c345f32631e5c6bdf92df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-764961",
	                "Source": "/var/lib/docker/volumes/multinode-764961/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-764961",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-764961",
	                "name.minikube.sigs.k8s.io": "multinode-764961",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "14ccdf16462ce3899d92ba50effc2ba5667c0ae6b24e87bc71727596cf4dd272",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/14ccdf16462c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-764961": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c9ae967ebe85",
	                        "multinode-764961"
	                    ],
	                    "NetworkID": "bd5e429e5fbc276fc253f4727c39750edce8f0300b2c58356052ee2ce664e851",
	                    "EndpointID": "3e36624ef27605ca773d1fb6ae06d6475e8b4e72e43abd10e401fad2edb9ffb2",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
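Note: the Networks block in the inspect output above confirms the ping target was correct; 192.168.58.1 is the gateway of the multinode-764961 docker network that minikube created for this profile, so the failure is a permission problem inside the pod rather than a wrong address. The gateway can be read back with a one-liner such as (illustrative):

	docker network inspect multinode-764961 --format '{{(index .IPAM.Config 0).Gateway}}'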
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-764961 -n multinode-764961
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-764961 logs -n 25: (1.236834152s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-980625                           | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:20 UTC | 12 Dec 23 22:21 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-980625 ssh -- ls                    | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-962610                           | mount-start-1-962610 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-980625 ssh -- ls                    | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-980625                           | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	| start   | -p mount-start-2-980625                           | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	| ssh     | mount-start-2-980625 ssh -- ls                    | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-980625                           | mount-start-2-980625 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	| delete  | -p mount-start-1-962610                           | mount-start-1-962610 | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:21 UTC |
	| start   | -p multinode-764961                               | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:21 UTC | 12 Dec 23 22:23 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- apply -f                   | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- rollout                    | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- get pods -o                | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- get pods -o                | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-67rxw --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-bbwmj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-67rxw --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-bbwmj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-67rxw -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-bbwmj -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- get pods -o                | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-67rxw                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC |                     |
	|         | busybox-5bc68d56bd-67rxw -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC | 12 Dec 23 22:23 UTC |
	|         | busybox-5bc68d56bd-bbwmj                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-764961 -- exec                       | multinode-764961     | jenkins | v1.32.0 | 12 Dec 23 22:23 UTC |                     |
	|         | busybox-5bc68d56bd-bbwmj -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:21:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:21:16.286449  101931 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:21:16.286612  101931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:21:16.286622  101931 out.go:309] Setting ErrFile to fd 2...
	I1212 22:21:16.286630  101931 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:21:16.286840  101931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:21:16.287427  101931 out.go:303] Setting JSON to false
	I1212 22:21:16.288582  101931 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3828,"bootTime":1702415848,"procs":468,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:21:16.288644  101931 start.go:138] virtualization: kvm guest
	I1212 22:21:16.290895  101931 out.go:177] * [multinode-764961] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:21:16.292812  101931 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:21:16.292839  101931 notify.go:220] Checking for updates...
	I1212 22:21:16.295407  101931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:21:16.296713  101931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:21:16.298010  101931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:21:16.299438  101931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:21:16.300846  101931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:21:16.302403  101931 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:21:16.323398  101931 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:21:16.323507  101931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:21:16.374411  101931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-12 22:21:16.365745964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:21:16.374504  101931 docker.go:295] overlay module found
	I1212 22:21:16.376504  101931 out.go:177] * Using the docker driver based on user configuration
	I1212 22:21:16.377857  101931 start.go:298] selected driver: docker
	I1212 22:21:16.377867  101931 start.go:902] validating driver "docker" against <nil>
	I1212 22:21:16.377876  101931 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:21:16.378602  101931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:21:16.429736  101931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-12 22:21:16.421594927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:21:16.429891  101931 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:21:16.430083  101931 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 22:21:16.432091  101931 out.go:177] * Using Docker driver with root privileges
	I1212 22:21:16.433391  101931 cni.go:84] Creating CNI manager for ""
	I1212 22:21:16.433403  101931 cni.go:136] 0 nodes found, recommending kindnet
	I1212 22:21:16.433414  101931 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 22:21:16.433426  101931 start_flags.go:323] config:
	{Name:multinode-764961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:21:16.434828  101931 out.go:177] * Starting control plane node multinode-764961 in cluster multinode-764961
	I1212 22:21:16.436050  101931 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:21:16.437357  101931 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:21:16.438484  101931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:21:16.438513  101931 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:21:16.438521  101931 cache.go:56] Caching tarball of preloaded images
	I1212 22:21:16.438577  101931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:21:16.438593  101931 preload.go:174] Found /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:21:16.438602  101931 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:21:16.438909  101931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/config.json ...
	I1212 22:21:16.438940  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/config.json: {Name:mk85db037e469e78e5524b5549819d5bc83e52d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:16.454775  101931 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 22:21:16.454807  101931 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 22:21:16.454827  101931 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:21:16.454870  101931 start.go:365] acquiring machines lock for multinode-764961: {Name:mkab56f044b3d7291f05585774e1444d79990b8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:21:16.454968  101931 start.go:369] acquired machines lock for "multinode-764961" in 80.189µs
	I1212 22:21:16.454991  101931 start.go:93] Provisioning new machine with config: &{Name:multinode-764961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:21:16.455074  101931 start.go:125] createHost starting for "" (driver="docker")
	I1212 22:21:16.457025  101931 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 22:21:16.457252  101931 start.go:159] libmachine.API.Create for "multinode-764961" (driver="docker")
	I1212 22:21:16.457289  101931 client.go:168] LocalClient.Create starting
	I1212 22:21:16.457370  101931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem
	I1212 22:21:16.457412  101931 main.go:141] libmachine: Decoding PEM data...
	I1212 22:21:16.457429  101931 main.go:141] libmachine: Parsing certificate...
	I1212 22:21:16.457485  101931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem
	I1212 22:21:16.457508  101931 main.go:141] libmachine: Decoding PEM data...
	I1212 22:21:16.457519  101931 main.go:141] libmachine: Parsing certificate...
	I1212 22:21:16.457807  101931 cli_runner.go:164] Run: docker network inspect multinode-764961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 22:21:16.473295  101931 cli_runner.go:211] docker network inspect multinode-764961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 22:21:16.473369  101931 network_create.go:281] running [docker network inspect multinode-764961] to gather additional debugging logs...
	I1212 22:21:16.473388  101931 cli_runner.go:164] Run: docker network inspect multinode-764961
	W1212 22:21:16.488684  101931 cli_runner.go:211] docker network inspect multinode-764961 returned with exit code 1
	I1212 22:21:16.488710  101931 network_create.go:284] error running [docker network inspect multinode-764961]: docker network inspect multinode-764961: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-764961 not found
	I1212 22:21:16.488726  101931 network_create.go:286] output of [docker network inspect multinode-764961]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-764961 not found
	
	** /stderr **
	I1212 22:21:16.488818  101931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:21:16.504816  101931 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-29b572e761a6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9c:5a:7c:4d} reservation:<nil>}
	I1212 22:21:16.505212  101931 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002128530}
	I1212 22:21:16.505240  101931 network_create.go:124] attempt to create docker network multinode-764961 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1212 22:21:16.505282  101931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-764961 multinode-764961
	I1212 22:21:16.556672  101931 network_create.go:108] docker network multinode-764961 192.168.58.0/24 created
	I1212 22:21:16.556711  101931 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-764961" container
	I1212 22:21:16.556764  101931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 22:21:16.572003  101931 cli_runner.go:164] Run: docker volume create multinode-764961 --label name.minikube.sigs.k8s.io=multinode-764961 --label created_by.minikube.sigs.k8s.io=true
	I1212 22:21:16.590038  101931 oci.go:103] Successfully created a docker volume multinode-764961
	I1212 22:21:16.590113  101931 cli_runner.go:164] Run: docker run --rm --name multinode-764961-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-764961 --entrypoint /usr/bin/test -v multinode-764961:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 22:21:17.076951  101931 oci.go:107] Successfully prepared a docker volume multinode-764961
	I1212 22:21:17.076993  101931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:21:17.077018  101931 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 22:21:17.077084  101931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-764961:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 22:21:22.177201  101931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-764961:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir: (5.100071758s)
	I1212 22:21:22.177237  101931 kic.go:203] duration metric: took 5.100217 seconds to extract preloaded images to volume
	W1212 22:21:22.177380  101931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 22:21:22.177495  101931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 22:21:22.226846  101931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-764961 --name multinode-764961 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-764961 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-764961 --network multinode-764961 --ip 192.168.58.2 --volume multinode-764961:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 22:21:22.520596  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Running}}
	I1212 22:21:22.537275  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:21:22.555671  101931 cli_runner.go:164] Run: docker exec multinode-764961 stat /var/lib/dpkg/alternatives/iptables
	I1212 22:21:22.594937  101931 oci.go:144] the created container "multinode-764961" has a running status.
	I1212 22:21:22.594968  101931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa...
	I1212 22:21:22.659719  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 22:21:22.659765  101931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 22:21:22.680863  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:21:22.697381  101931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 22:21:22.697405  101931 kic_runner.go:114] Args: [docker exec --privileged multinode-764961 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 22:21:22.753432  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:21:22.769959  101931 machine.go:88] provisioning docker machine ...
	I1212 22:21:22.770002  101931 ubuntu.go:169] provisioning hostname "multinode-764961"
	I1212 22:21:22.770068  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:22.787878  101931 main.go:141] libmachine: Using SSH client type: native
	I1212 22:21:22.788228  101931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1212 22:21:22.788243  101931 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-764961 && echo "multinode-764961" | sudo tee /etc/hostname
	I1212 22:21:22.789012  101931 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41154->127.0.0.1:32847: read: connection reset by peer
	I1212 22:21:25.921094  101931 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-764961
	
	I1212 22:21:25.921187  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:25.936924  101931 main.go:141] libmachine: Using SSH client type: native
	I1212 22:21:25.937254  101931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1212 22:21:25.937273  101931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-764961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-764961/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-764961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:21:26.055262  101931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:21:26.055290  101931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:21:26.055322  101931 ubuntu.go:177] setting up certificates
	I1212 22:21:26.055334  101931 provision.go:83] configureAuth start
	I1212 22:21:26.055382  101931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961
	I1212 22:21:26.070989  101931 provision.go:138] copyHostCerts
	I1212 22:21:26.071028  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:21:26.071061  101931 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem, removing ...
	I1212 22:21:26.071074  101931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:21:26.071142  101931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:21:26.071227  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:21:26.071250  101931 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem, removing ...
	I1212 22:21:26.071257  101931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:21:26.071296  101931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:21:26.071362  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:21:26.071404  101931 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem, removing ...
	I1212 22:21:26.071417  101931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:21:26.071470  101931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:21:26.071607  101931 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.multinode-764961 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-764961]
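The server certificate above is signed against minikube's local CA, with the machine IP, loopback addresses, and hostnames from the san=[...] list baked in. minikube performs this signing in Go; a rough openssl equivalent of the same step, with illustrative file names rather than minikube's actual implementation, would be:

    # Illustrative only: issue a server cert whose SANs mirror the log line above.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.multinode-764961"
    openssl x509 -req -in server.csr -days 365 -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-764961')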
	I1212 22:21:26.138259  101931 provision.go:172] copyRemoteCerts
	I1212 22:21:26.138314  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:21:26.138349  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:26.154520  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:26.243572  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:21:26.243642  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:21:26.264110  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:21:26.264166  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 22:21:26.284376  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:21:26.284441  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:21:26.305021  101931 provision.go:86] duration metric: configureAuth took 249.672985ms
	I1212 22:21:26.305046  101931 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:21:26.305203  101931 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:21:26.305299  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:26.321776  101931 main.go:141] libmachine: Using SSH client type: native
	I1212 22:21:26.322111  101931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1212 22:21:26.322129  101931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:21:26.524200  101931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:21:26.524225  101931 machine.go:91] provisioned docker machine in 3.75423864s
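The environment file written above is how minikube passes extra flags to CRI-O: the kicbase image's crio.service is expected to load /etc/sysconfig/crio.minikube, so the systemctl restart picks up --insecure-registry. A quick sanity check of that assumption on the node:

    cat /etc/sysconfig/crio.minikube          # should show CRIO_MINIKUBE_OPTIONS=...
    systemctl cat crio | grep -i environment  # confirm the unit references the file
    systemctl is-active crio                  # confirm the restart left CRI-O running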
	I1212 22:21:26.524237  101931 client.go:171] LocalClient.Create took 10.066939279s
	I1212 22:21:26.524259  101931 start.go:167] duration metric: libmachine.API.Create for "multinode-764961" took 10.067006661s
	I1212 22:21:26.524268  101931 start.go:300] post-start starting for "multinode-764961" (driver="docker")
	I1212 22:21:26.524283  101931 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:21:26.524350  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:21:26.524396  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:26.540948  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:26.628087  101931 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:21:26.630956  101931 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1212 22:21:26.630979  101931 command_runner.go:130] > NAME="Ubuntu"
	I1212 22:21:26.630984  101931 command_runner.go:130] > VERSION_ID="22.04"
	I1212 22:21:26.630991  101931 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1212 22:21:26.630999  101931 command_runner.go:130] > VERSION_CODENAME=jammy
	I1212 22:21:26.631004  101931 command_runner.go:130] > ID=ubuntu
	I1212 22:21:26.631011  101931 command_runner.go:130] > ID_LIKE=debian
	I1212 22:21:26.631018  101931 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1212 22:21:26.631027  101931 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1212 22:21:26.631038  101931 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1212 22:21:26.631056  101931 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1212 22:21:26.631071  101931 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1212 22:21:26.631131  101931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:21:26.631159  101931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:21:26.631168  101931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:21:26.631178  101931 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 22:21:26.631194  101931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:21:26.631249  101931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:21:26.631335  101931 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> 163992.pem in /etc/ssl/certs
	I1212 22:21:26.631346  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> /etc/ssl/certs/163992.pem
	I1212 22:21:26.631447  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:21:26.638710  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:21:26.659292  101931 start.go:303] post-start completed in 135.008285ms
	I1212 22:21:26.659623  101931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961
	I1212 22:21:26.675801  101931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/config.json ...
	I1212 22:21:26.676032  101931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:21:26.676071  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:26.691874  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:26.775841  101931 command_runner.go:130] > 20%
	I1212 22:21:26.776039  101931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:21:26.779905  101931 command_runner.go:130] > 233G
	I1212 22:21:26.779942  101931 start.go:128] duration metric: createHost completed in 10.324851757s
	I1212 22:21:26.779957  101931 start.go:83] releasing machines lock for "multinode-764961", held for 10.324977501s
	I1212 22:21:26.780014  101931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961
	I1212 22:21:26.795609  101931 ssh_runner.go:195] Run: cat /version.json
	I1212 22:21:26.795648  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:26.795683  101931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:21:26.795745  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:26.812087  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:26.813114  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:26.894936  101931 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "75a4d7cfa55ef6339c3085d6042e756469710034"}
	I1212 22:21:26.895081  101931 ssh_runner.go:195] Run: systemctl --version
	I1212 22:21:26.985681  101931 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:21:26.987740  101931 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1212 22:21:26.987777  101931 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 22:21:26.987880  101931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:21:27.122782  101931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:21:27.126566  101931 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1212 22:21:27.126597  101931 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1212 22:21:27.126603  101931 command_runner.go:130] > Device: 37h/55d	Inode: 570028      Links: 1
	I1212 22:21:27.126611  101931 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:21:27.126626  101931 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1212 22:21:27.126638  101931 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1212 22:21:27.126649  101931 command_runner.go:130] > Change: 2023-12-12 22:02:56.298816728 +0000
	I1212 22:21:27.126659  101931 command_runner.go:130] >  Birth: 2023-12-12 22:02:56.298816728 +0000
	I1212 22:21:27.126752  101931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:21:27.143259  101931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:21:27.143320  101931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:21:27.169555  101931 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1212 22:21:27.169608  101931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
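Both find invocations implement the same soft-disable pattern: matching CNI configs are renamed with a .mk_disabled suffix rather than deleted, so CRI-O stops loading them but they remain restorable. A standalone sketch of the rename, assuming the same /etc/cni/net.d layout:

    # Non-destructively disable bridge/podman CNI configs; print what was moved.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # Restore later with: for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done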
	I1212 22:21:27.169619  101931 start.go:475] detecting cgroup driver to use...
	I1212 22:21:27.169653  101931 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:21:27.169689  101931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:21:27.182906  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:21:27.192319  101931 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:21:27.192389  101931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:21:27.203882  101931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:21:27.215613  101931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:21:27.281487  101931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:21:27.353369  101931 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 22:21:27.353394  101931 docker.go:219] disabling docker service ...
	I1212 22:21:27.353429  101931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:21:27.370029  101931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:21:27.379806  101931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:21:27.389813  101931 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 22:21:27.452788  101931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:21:27.463384  101931 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 22:21:27.521886  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:21:27.531681  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:21:27.544555  101931 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
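/etc/crictl.yaml sets crictl's default runtime endpoint, which is why the bare crictl calls later in this log reach CRI-O's socket without any flag. The explicit equivalent, for comparison:

    # Same effect as the /etc/crictl.yaml default, spelled out per invocation:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version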
	I1212 22:21:27.545275  101931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:21:27.545336  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:21:27.553437  101931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:21:27.553495  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:21:27.561468  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:21:27.569482  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
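The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, and conmon cgroup pinned. A quick check of the result (the expected values below are inferred from the log messages, not captured output):

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"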
	I1212 22:21:27.577903  101931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:21:27.585633  101931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:21:27.592063  101931 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 22:21:27.592678  101931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
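bridge-nf-call-iptables is only read here (it is already 1), while ip_forward is written straight through /proc, which does not survive a reboot. On a long-lived node the persistent form would be a sysctl.d drop-in; a generic sketch, not something minikube needs for throwaway KIC nodes:

    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system   # reload every sysctl.d drop-in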
	I1212 22:21:27.599667  101931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:21:27.667801  101931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:21:27.764146  101931 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:21:27.764221  101931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:21:27.767396  101931 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:21:27.767423  101931 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:21:27.767434  101931 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1212 22:21:27.767445  101931 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:21:27.767459  101931 command_runner.go:130] > Access: 2023-12-12 22:21:27.748047584 +0000
	I1212 22:21:27.767472  101931 command_runner.go:130] > Modify: 2023-12-12 22:21:27.748047584 +0000
	I1212 22:21:27.767484  101931 command_runner.go:130] > Change: 2023-12-12 22:21:27.748047584 +0000
	I1212 22:21:27.767491  101931 command_runner.go:130] >  Birth: -
	I1212 22:21:27.767513  101931 start.go:543] Will wait 60s for crictl version
	I1212 22:21:27.767545  101931 ssh_runner.go:195] Run: which crictl
	I1212 22:21:27.770494  101931 command_runner.go:130] > /usr/bin/crictl
	I1212 22:21:27.770550  101931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:21:27.800940  101931 command_runner.go:130] > Version:  0.1.0
	I1212 22:21:27.800968  101931 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:21:27.800973  101931 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1212 22:21:27.800979  101931 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:21:27.800996  101931 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 22:21:27.801061  101931 ssh_runner.go:195] Run: crio --version
	I1212 22:21:27.832907  101931 command_runner.go:130] > crio version 1.24.6
	I1212 22:21:27.832934  101931 command_runner.go:130] > Version:          1.24.6
	I1212 22:21:27.832941  101931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 22:21:27.832945  101931 command_runner.go:130] > GitTreeState:     clean
	I1212 22:21:27.832951  101931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 22:21:27.832956  101931 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 22:21:27.832960  101931 command_runner.go:130] > Compiler:         gc
	I1212 22:21:27.832964  101931 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:21:27.832969  101931 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:21:27.832976  101931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:21:27.832986  101931 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:21:27.832994  101931 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:21:27.833059  101931 ssh_runner.go:195] Run: crio --version
	I1212 22:21:27.867626  101931 command_runner.go:130] > crio version 1.24.6
	I1212 22:21:27.867650  101931 command_runner.go:130] > Version:          1.24.6
	I1212 22:21:27.867661  101931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 22:21:27.867669  101931 command_runner.go:130] > GitTreeState:     clean
	I1212 22:21:27.867678  101931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 22:21:27.867686  101931 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 22:21:27.867691  101931 command_runner.go:130] > Compiler:         gc
	I1212 22:21:27.867696  101931 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:21:27.867701  101931 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:21:27.867710  101931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:21:27.867717  101931 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:21:27.867722  101931 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:21:27.869725  101931 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 22:21:27.871286  101931 cli_runner.go:164] Run: docker network inspect multinode-764961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:21:27.887599  101931 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1212 22:21:27.891100  101931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:21:27.900678  101931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:21:27.900724  101931 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:21:27.951370  101931 command_runner.go:130] > {
	I1212 22:21:27.951396  101931 command_runner.go:130] >   "images": [
	I1212 22:21:27.951405  101931 command_runner.go:130] >     {
	I1212 22:21:27.951419  101931 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 22:21:27.951428  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.951442  101931 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 22:21:27.951453  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951460  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.951484  101931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 22:21:27.951502  101931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 22:21:27.951513  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951525  101931 command_runner.go:130] >       "size": "65258016",
	I1212 22:21:27.951540  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.951567  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.951578  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.951586  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.951593  101931 command_runner.go:130] >     },
	I1212 22:21:27.951600  101931 command_runner.go:130] >     {
	I1212 22:21:27.951611  101931 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 22:21:27.951623  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.951630  101931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 22:21:27.951637  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951641  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.951652  101931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 22:21:27.951660  101931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 22:21:27.951667  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951677  101931 command_runner.go:130] >       "size": "31470524",
	I1212 22:21:27.951684  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.951688  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.951695  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.951707  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.951714  101931 command_runner.go:130] >     },
	I1212 22:21:27.951718  101931 command_runner.go:130] >     {
	I1212 22:21:27.951727  101931 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 22:21:27.951734  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.951740  101931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 22:21:27.951746  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951751  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.951761  101931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 22:21:27.951770  101931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 22:21:27.951777  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951782  101931 command_runner.go:130] >       "size": "53621675",
	I1212 22:21:27.951788  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.951792  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.951799  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.951803  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.951807  101931 command_runner.go:130] >     },
	I1212 22:21:27.951811  101931 command_runner.go:130] >     {
	I1212 22:21:27.951824  101931 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 22:21:27.951831  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.951837  101931 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 22:21:27.951840  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951847  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.951854  101931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 22:21:27.951864  101931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 22:21:27.951878  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951886  101931 command_runner.go:130] >       "size": "295456551",
	I1212 22:21:27.951890  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.951897  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.951901  101931 command_runner.go:130] >       },
	I1212 22:21:27.951905  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.951911  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.951916  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.951922  101931 command_runner.go:130] >     },
	I1212 22:21:27.951926  101931 command_runner.go:130] >     {
	I1212 22:21:27.951934  101931 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 22:21:27.951945  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.951953  101931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 22:21:27.951960  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951964  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.951974  101931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 22:21:27.951984  101931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 22:21:27.951991  101931 command_runner.go:130] >       ],
	I1212 22:21:27.951995  101931 command_runner.go:130] >       "size": "127226832",
	I1212 22:21:27.951999  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.952005  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.952012  101931 command_runner.go:130] >       },
	I1212 22:21:27.952016  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.952023  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.952027  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.952033  101931 command_runner.go:130] >     },
	I1212 22:21:27.952037  101931 command_runner.go:130] >     {
	I1212 22:21:27.952047  101931 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 22:21:27.952053  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.952064  101931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 22:21:27.952071  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952076  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.952086  101931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 22:21:27.952100  101931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 22:21:27.952107  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952112  101931 command_runner.go:130] >       "size": "123261750",
	I1212 22:21:27.952118  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.952123  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.952129  101931 command_runner.go:130] >       },
	I1212 22:21:27.952134  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.952140  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.952144  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.952151  101931 command_runner.go:130] >     },
	I1212 22:21:27.952155  101931 command_runner.go:130] >     {
	I1212 22:21:27.952163  101931 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 22:21:27.952170  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.952175  101931 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 22:21:27.952184  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952191  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.952199  101931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 22:21:27.952208  101931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 22:21:27.952214  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952219  101931 command_runner.go:130] >       "size": "74749335",
	I1212 22:21:27.952225  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.952230  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.952237  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.952242  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.952248  101931 command_runner.go:130] >     },
	I1212 22:21:27.952252  101931 command_runner.go:130] >     {
	I1212 22:21:27.952260  101931 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 22:21:27.952267  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.952273  101931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 22:21:27.952279  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952283  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.952345  101931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 22:21:27.952368  101931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 22:21:27.952377  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952382  101931 command_runner.go:130] >       "size": "61551410",
	I1212 22:21:27.952387  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.952391  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.952398  101931 command_runner.go:130] >       },
	I1212 22:21:27.952402  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.952406  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.952413  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.952417  101931 command_runner.go:130] >     },
	I1212 22:21:27.952423  101931 command_runner.go:130] >     {
	I1212 22:21:27.952431  101931 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 22:21:27.952441  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.952451  101931 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 22:21:27.952461  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952469  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.952484  101931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 22:21:27.952496  101931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 22:21:27.952508  101931 command_runner.go:130] >       ],
	I1212 22:21:27.952516  101931 command_runner.go:130] >       "size": "750414",
	I1212 22:21:27.952520  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.952527  101931 command_runner.go:130] >         "value": "65535"
	I1212 22:21:27.952531  101931 command_runner.go:130] >       },
	I1212 22:21:27.952537  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.952541  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.952546  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.952549  101931 command_runner.go:130] >     }
	I1212 22:21:27.952555  101931 command_runner.go:130] >   ]
	I1212 22:21:27.952559  101931 command_runner.go:130] > }
	I1212 22:21:27.953546  101931 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:21:27.953565  101931 crio.go:415] Images already preloaded, skipping extraction
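minikube parses this image inventory in Go to decide that the preload already covers every required image. From a shell, the same JSON can be summarized with jq (assuming jq is installed):

    sudo crictl images --output json \
      | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'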
	I1212 22:21:27.953616  101931 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 22:21:27.985509  101931 command_runner.go:130] > {
	I1212 22:21:27.985531  101931 command_runner.go:130] >   "images": [
	I1212 22:21:27.985536  101931 command_runner.go:130] >     {
	I1212 22:21:27.985546  101931 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1212 22:21:27.985552  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.985562  101931 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1212 22:21:27.985568  101931 command_runner.go:130] >       ],
	I1212 22:21:27.985575  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.985593  101931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1212 22:21:27.985605  101931 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1212 22:21:27.985614  101931 command_runner.go:130] >       ],
	I1212 22:21:27.985623  101931 command_runner.go:130] >       "size": "65258016",
	I1212 22:21:27.985629  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.985633  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.985640  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.985647  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.985656  101931 command_runner.go:130] >     },
	I1212 22:21:27.985663  101931 command_runner.go:130] >     {
	I1212 22:21:27.985673  101931 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 22:21:27.985680  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.985689  101931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 22:21:27.985703  101931 command_runner.go:130] >       ],
	I1212 22:21:27.985709  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.985717  101931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 22:21:27.985728  101931 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 22:21:27.985734  101931 command_runner.go:130] >       ],
	I1212 22:21:27.985755  101931 command_runner.go:130] >       "size": "31470524",
	I1212 22:21:27.985766  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.985778  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.985787  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.985794  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.985803  101931 command_runner.go:130] >     },
	I1212 22:21:27.985808  101931 command_runner.go:130] >     {
	I1212 22:21:27.985817  101931 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1212 22:21:27.985829  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.985838  101931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1212 22:21:27.985844  101931 command_runner.go:130] >       ],
	I1212 22:21:27.985852  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.985867  101931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1212 22:21:27.985886  101931 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1212 22:21:27.985895  101931 command_runner.go:130] >       ],
	I1212 22:21:27.985903  101931 command_runner.go:130] >       "size": "53621675",
	I1212 22:21:27.985911  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.985915  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.985924  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.985935  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.985941  101931 command_runner.go:130] >     },
	I1212 22:21:27.985950  101931 command_runner.go:130] >     {
	I1212 22:21:27.985961  101931 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1212 22:21:27.985979  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.985990  101931 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1212 22:21:27.985999  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986004  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.986016  101931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1212 22:21:27.986031  101931 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1212 22:21:27.986048  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986058  101931 command_runner.go:130] >       "size": "295456551",
	I1212 22:21:27.986068  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.986078  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.986084  101931 command_runner.go:130] >       },
	I1212 22:21:27.986092  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.986096  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.986104  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.986113  101931 command_runner.go:130] >     },
	I1212 22:21:27.986120  101931 command_runner.go:130] >     {
	I1212 22:21:27.986134  101931 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1212 22:21:27.986143  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.986152  101931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1212 22:21:27.986164  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986174  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.986182  101931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1212 22:21:27.986197  101931 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1212 22:21:27.986207  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986214  101931 command_runner.go:130] >       "size": "127226832",
	I1212 22:21:27.986224  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.986234  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.986244  101931 command_runner.go:130] >       },
	I1212 22:21:27.986251  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.986260  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.986267  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.986275  101931 command_runner.go:130] >     },
	I1212 22:21:27.986280  101931 command_runner.go:130] >     {
	I1212 22:21:27.986287  101931 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1212 22:21:27.986297  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.986310  101931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1212 22:21:27.986316  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986327  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.986357  101931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1212 22:21:27.986373  101931 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1212 22:21:27.986380  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986384  101931 command_runner.go:130] >       "size": "123261750",
	I1212 22:21:27.986394  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.986404  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.986416  101931 command_runner.go:130] >       },
	I1212 22:21:27.986426  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.986433  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.986443  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.986449  101931 command_runner.go:130] >     },
	I1212 22:21:27.986458  101931 command_runner.go:130] >     {
	I1212 22:21:27.986468  101931 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1212 22:21:27.986476  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.986482  101931 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1212 22:21:27.986490  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986498  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.986513  101931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1212 22:21:27.986529  101931 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1212 22:21:27.986538  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986545  101931 command_runner.go:130] >       "size": "74749335",
	I1212 22:21:27.986555  101931 command_runner.go:130] >       "uid": null,
	I1212 22:21:27.986562  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.986571  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.986579  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.986587  101931 command_runner.go:130] >     },
	I1212 22:21:27.986594  101931 command_runner.go:130] >     {
	I1212 22:21:27.986610  101931 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1212 22:21:27.986625  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.986637  101931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1212 22:21:27.986646  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986656  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.986688  101931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1212 22:21:27.986704  101931 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1212 22:21:27.986714  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986721  101931 command_runner.go:130] >       "size": "61551410",
	I1212 22:21:27.986731  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.986738  101931 command_runner.go:130] >         "value": "0"
	I1212 22:21:27.986746  101931 command_runner.go:130] >       },
	I1212 22:21:27.986754  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.986761  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.986765  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.986776  101931 command_runner.go:130] >     },
	I1212 22:21:27.986784  101931 command_runner.go:130] >     {
	I1212 22:21:27.986809  101931 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1212 22:21:27.986819  101931 command_runner.go:130] >       "repoTags": [
	I1212 22:21:27.986831  101931 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1212 22:21:27.986840  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986846  101931 command_runner.go:130] >       "repoDigests": [
	I1212 22:21:27.986857  101931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1212 22:21:27.986872  101931 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1212 22:21:27.986886  101931 command_runner.go:130] >       ],
	I1212 22:21:27.986893  101931 command_runner.go:130] >       "size": "750414",
	I1212 22:21:27.986903  101931 command_runner.go:130] >       "uid": {
	I1212 22:21:27.986910  101931 command_runner.go:130] >         "value": "65535"
	I1212 22:21:27.986919  101931 command_runner.go:130] >       },
	I1212 22:21:27.986926  101931 command_runner.go:130] >       "username": "",
	I1212 22:21:27.986936  101931 command_runner.go:130] >       "spec": null,
	I1212 22:21:27.986943  101931 command_runner.go:130] >       "pinned": false
	I1212 22:21:27.986951  101931 command_runner.go:130] >     }
	I1212 22:21:27.986957  101931 command_runner.go:130] >   ]
	I1212 22:21:27.986965  101931 command_runner.go:130] > }
	I1212 22:21:27.987083  101931 crio.go:496] all images are preloaded for cri-o runtime.
	I1212 22:21:27.987095  101931 cache_images.go:84] Images are preloaded, skipping loading
	I1212 22:21:27.987162  101931 ssh_runner.go:195] Run: crio config
	I1212 22:21:28.025212  101931 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:21:28.025253  101931 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:21:28.025265  101931 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:21:28.025272  101931 command_runner.go:130] > #
	I1212 22:21:28.025285  101931 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:21:28.025296  101931 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:21:28.025314  101931 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:21:28.025329  101931 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:21:28.025335  101931 command_runner.go:130] > # reload'.
	I1212 22:21:28.025346  101931 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:21:28.025361  101931 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:21:28.025374  101931 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:21:28.025391  101931 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:21:28.025402  101931 command_runner.go:130] > [crio]
	I1212 22:21:28.025413  101931 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:21:28.025425  101931 command_runner.go:130] > # containers images, in this directory.
	I1212 22:21:28.025440  101931 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 22:21:28.025456  101931 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:21:28.025465  101931 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1212 22:21:28.025483  101931 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:21:28.025499  101931 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:21:28.025507  101931 command_runner.go:130] > # storage_driver = "vfs"
	I1212 22:21:28.025517  101931 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:21:28.025530  101931 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:21:28.025541  101931 command_runner.go:130] > # storage_option = [
	I1212 22:21:28.025551  101931 command_runner.go:130] > # ]
	I1212 22:21:28.025562  101931 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:21:28.025576  101931 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:21:28.025584  101931 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:21:28.025595  101931 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:21:28.025613  101931 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:21:28.025625  101931 command_runner.go:130] > # always happen on a node reboot
	I1212 22:21:28.025634  101931 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:21:28.025647  101931 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:21:28.025660  101931 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:21:28.025686  101931 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:21:28.025699  101931 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:21:28.025715  101931 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:21:28.025733  101931 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:21:28.025741  101931 command_runner.go:130] > # internal_wipe = true
	I1212 22:21:28.025751  101931 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:21:28.025766  101931 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:21:28.025776  101931 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:21:28.025789  101931 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:21:28.025807  101931 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:21:28.025818  101931 command_runner.go:130] > [crio.api]
	I1212 22:21:28.025827  101931 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:21:28.025845  101931 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:21:28.025893  101931 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:21:28.025907  101931 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:21:28.025919  101931 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:21:28.025932  101931 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:21:28.025943  101931 command_runner.go:130] > # stream_port = "0"
	I1212 22:21:28.025954  101931 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:21:28.025964  101931 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:21:28.025975  101931 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:21:28.025987  101931 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:21:28.025999  101931 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:21:28.026009  101931 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:21:28.026020  101931 command_runner.go:130] > # minutes.
	I1212 22:21:28.026028  101931 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:21:28.026042  101931 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:21:28.026057  101931 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:21:28.026067  101931 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:21:28.026078  101931 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:21:28.026092  101931 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:21:28.026109  101931 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:21:28.026120  101931 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:21:28.026137  101931 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:21:28.026147  101931 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 22:21:28.026160  101931 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:21:28.026169  101931 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
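	(Editor's note: the [crio.api] table above determines where CRI-O listens. A minimal Go sketch, using only the standard library and assuming the default socket path shown in the dump, that checks the socket is accepting connections:)

package main

import (
	"fmt"
	"net"
	"time"
)

// Minimal sketch: confirm the CRI-O AF_LOCAL socket from the [crio.api]
// "listen" setting above is accepting connections. The path is the default
// shown in the config dump; adjust it if your crio.conf overrides "listen".
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
	if err != nil {
		fmt.Println("CRI-O socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("CRI-O socket is accepting connections")
}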
	I1212 22:21:28.026271  101931 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:21:28.026289  101931 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:21:28.026295  101931 command_runner.go:130] > [crio.runtime]
	I1212 22:21:28.026304  101931 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:21:28.026312  101931 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:21:28.026324  101931 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:21:28.026370  101931 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:21:28.026381  101931 command_runner.go:130] > # default_ulimits = [
	I1212 22:21:28.026386  101931 command_runner.go:130] > # ]
	I1212 22:21:28.026397  101931 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:21:28.026411  101931 command_runner.go:130] > # no_pivot = false
	I1212 22:21:28.026425  101931 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:21:28.026446  101931 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:21:28.026459  101931 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:21:28.026472  101931 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:21:28.026484  101931 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:21:28.026499  101931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:21:28.026509  101931 command_runner.go:130] > # conmon = ""
	I1212 22:21:28.026519  101931 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:21:28.026532  101931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:21:28.026543  101931 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:21:28.026557  101931 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:21:28.026569  101931 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:21:28.026584  101931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:21:28.026593  101931 command_runner.go:130] > # conmon_env = [
	I1212 22:21:28.026601  101931 command_runner.go:130] > # ]
	I1212 22:21:28.026614  101931 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:21:28.026626  101931 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:21:28.026640  101931 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:21:28.026650  101931 command_runner.go:130] > # default_env = [
	I1212 22:21:28.026662  101931 command_runner.go:130] > # ]
	I1212 22:21:28.026673  101931 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:21:28.026682  101931 command_runner.go:130] > # selinux = false
	I1212 22:21:28.026694  101931 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:21:28.026712  101931 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:21:28.026725  101931 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:21:28.026735  101931 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:21:28.026746  101931 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:21:28.026759  101931 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:21:28.026773  101931 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:21:28.026784  101931 command_runner.go:130] > # which might increase security.
	I1212 22:21:28.026795  101931 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1212 22:21:28.026810  101931 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:21:28.026824  101931 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:21:28.026837  101931 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:21:28.026849  101931 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:21:28.026860  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:21:28.026868  101931 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:21:28.026891  101931 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:21:28.026902  101931 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:21:28.026911  101931 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:21:28.026923  101931 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:21:28.026930  101931 command_runner.go:130] > # irqbalance daemon.
	I1212 22:21:28.026942  101931 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:21:28.026956  101931 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:21:28.026968  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:21:28.026978  101931 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:21:28.026991  101931 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:21:28.027002  101931 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:21:28.027015  101931 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:21:28.027026  101931 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:21:28.027038  101931 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:21:28.027052  101931 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:21:28.027062  101931 command_runner.go:130] > # will be added.
	I1212 22:21:28.027072  101931 command_runner.go:130] > # default_capabilities = [
	I1212 22:21:28.027079  101931 command_runner.go:130] > # 	"CHOWN",
	I1212 22:21:28.027092  101931 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:21:28.027102  101931 command_runner.go:130] > # 	"FSETID",
	I1212 22:21:28.027112  101931 command_runner.go:130] > # 	"FOWNER",
	I1212 22:21:28.027119  101931 command_runner.go:130] > # 	"SETGID",
	I1212 22:21:28.027128  101931 command_runner.go:130] > # 	"SETUID",
	I1212 22:21:28.027135  101931 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:21:28.027146  101931 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:21:28.027156  101931 command_runner.go:130] > # 	"KILL",
	I1212 22:21:28.027162  101931 command_runner.go:130] > # ]
	I1212 22:21:28.027178  101931 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 22:21:28.027193  101931 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 22:21:28.027204  101931 command_runner.go:130] > # add_inheritable_capabilities = true
	I1212 22:21:28.027215  101931 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:21:28.027228  101931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:21:28.027239  101931 command_runner.go:130] > # default_sysctls = [
	I1212 22:21:28.027247  101931 command_runner.go:130] > # ]
	I1212 22:21:28.027256  101931 command_runner.go:130] > # List of devices on the host that a
	I1212 22:21:28.027273  101931 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:21:28.027287  101931 command_runner.go:130] > # allowed_devices = [
	I1212 22:21:28.027297  101931 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:21:28.027305  101931 command_runner.go:130] > # ]
	I1212 22:21:28.027315  101931 command_runner.go:130] > # List of additional devices, specified as
	I1212 22:21:28.027405  101931 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:21:28.027420  101931 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:21:28.027444  101931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:21:28.027458  101931 command_runner.go:130] > # additional_devices = [
	I1212 22:21:28.027467  101931 command_runner.go:130] > # ]
	I1212 22:21:28.027477  101931 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:21:28.027487  101931 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:21:28.027497  101931 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:21:28.027505  101931 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:21:28.027513  101931 command_runner.go:130] > # ]
	I1212 22:21:28.027525  101931 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:21:28.027539  101931 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:21:28.027560  101931 command_runner.go:130] > # Defaults to false.
	I1212 22:21:28.027578  101931 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:21:28.027595  101931 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:21:28.027609  101931 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:21:28.027619  101931 command_runner.go:130] > # hooks_dir = [
	I1212 22:21:28.027629  101931 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:21:28.027636  101931 command_runner.go:130] > # ]
	I1212 22:21:28.027650  101931 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:21:28.027664  101931 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:21:28.027676  101931 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:21:28.027685  101931 command_runner.go:130] > #
	I1212 22:21:28.027697  101931 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:21:28.027711  101931 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:21:28.027723  101931 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:21:28.027730  101931 command_runner.go:130] > #
	I1212 22:21:28.027741  101931 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:21:28.027755  101931 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:21:28.027769  101931 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:21:28.027781  101931 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:21:28.027790  101931 command_runner.go:130] > #
	I1212 22:21:28.027803  101931 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:21:28.027815  101931 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:21:28.027827  101931 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:21:28.027837  101931 command_runner.go:130] > # pids_limit = 0
	I1212 22:21:28.027848  101931 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 22:21:28.027861  101931 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:21:28.027875  101931 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:21:28.027892  101931 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:21:28.027901  101931 command_runner.go:130] > # log_size_max = -1
	I1212 22:21:28.027914  101931 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 22:21:28.027927  101931 command_runner.go:130] > # log_to_journald = false
	I1212 22:21:28.027942  101931 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:21:28.027954  101931 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:21:28.027966  101931 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:21:28.027978  101931 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:21:28.027990  101931 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:21:28.027998  101931 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:21:28.028011  101931 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:21:28.028025  101931 command_runner.go:130] > # read_only = false
	I1212 22:21:28.028039  101931 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:21:28.028070  101931 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:21:28.028080  101931 command_runner.go:130] > # live configuration reload.
	I1212 22:21:28.028088  101931 command_runner.go:130] > # log_level = "info"
	I1212 22:21:28.028101  101931 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:21:28.028112  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:21:28.028123  101931 command_runner.go:130] > # log_filter = ""
	I1212 22:21:28.028137  101931 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:21:28.028151  101931 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:21:28.028161  101931 command_runner.go:130] > # separated by comma.
	I1212 22:21:28.028170  101931 command_runner.go:130] > # uid_mappings = ""
	I1212 22:21:28.028183  101931 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:21:28.028196  101931 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:21:28.028207  101931 command_runner.go:130] > # separated by comma.
	I1212 22:21:28.028215  101931 command_runner.go:130] > # gid_mappings = ""
	I1212 22:21:28.028228  101931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:21:28.028240  101931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:21:28.028253  101931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:21:28.028260  101931 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:21:28.028268  101931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:21:28.028276  101931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:21:28.028285  101931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:21:28.028291  101931 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:21:28.028300  101931 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:21:28.028308  101931 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:21:28.028317  101931 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1212 22:21:28.028325  101931 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:21:28.028344  101931 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:21:28.028362  101931 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:21:28.028379  101931 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:21:28.028391  101931 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:21:28.028402  101931 command_runner.go:130] > # drop_infra_ctr = true
	I1212 22:21:28.028414  101931 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:21:28.028426  101931 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:21:28.028439  101931 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:21:28.028452  101931 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:21:28.028466  101931 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:21:28.028478  101931 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:21:28.028489  101931 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:21:28.028504  101931 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:21:28.028513  101931 command_runner.go:130] > # pinns_path = ""
	I1212 22:21:28.028530  101931 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:21:28.028544  101931 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:21:28.028558  101931 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:21:28.028569  101931 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:21:28.028579  101931 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:21:28.028594  101931 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 22:21:28.028612  101931 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 22:21:28.028624  101931 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:21:28.028641  101931 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:21:28.028653  101931 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:21:28.028664  101931 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:21:28.028673  101931 command_runner.go:130] > # ]
	I1212 22:21:28.028688  101931 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:21:28.028701  101931 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:21:28.028714  101931 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:21:28.028728  101931 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:21:28.028737  101931 command_runner.go:130] > #
	I1212 22:21:28.028746  101931 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:21:28.028758  101931 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:21:28.028768  101931 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:21:28.028779  101931 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:21:28.028791  101931 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:21:28.028799  101931 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:21:28.028809  101931 command_runner.go:130] > # Where:
	I1212 22:21:28.028820  101931 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:21:28.028837  101931 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:21:28.028851  101931 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:21:28.028864  101931 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:21:28.028872  101931 command_runner.go:130] > #   in $PATH.
	I1212 22:21:28.028885  101931 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:21:28.028900  101931 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:21:28.028914  101931 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:21:28.028924  101931 command_runner.go:130] > #   state.
	I1212 22:21:28.028937  101931 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:21:28.028950  101931 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 22:21:28.028961  101931 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:21:28.028974  101931 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:21:28.028988  101931 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:21:28.029003  101931 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:21:28.029015  101931 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:21:28.029029  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:21:28.029044  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:21:28.029057  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:21:28.029075  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:21:28.029091  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:21:28.029105  101931 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:21:28.029119  101931 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:21:28.029133  101931 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:21:28.029147  101931 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:21:28.029158  101931 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:21:28.029168  101931 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1212 22:21:28.029178  101931 command_runner.go:130] > runtime_type = "oci"
	I1212 22:21:28.029189  101931 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:21:28.029199  101931 command_runner.go:130] > runtime_config_path = ""
	I1212 22:21:28.029208  101931 command_runner.go:130] > monitor_path = ""
	I1212 22:21:28.029217  101931 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:21:28.029225  101931 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:21:28.029288  101931 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:21:28.029298  101931 command_runner.go:130] > # running containers
	I1212 22:21:28.029305  101931 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:21:28.029316  101931 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:21:28.029330  101931 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:21:28.029348  101931 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:21:28.029360  101931 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:21:28.029371  101931 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:21:28.029382  101931 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:21:28.029396  101931 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:21:28.029408  101931 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:21:28.029419  101931 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 22:21:28.029433  101931 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:21:28.029445  101931 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:21:28.029459  101931 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:21:28.029480  101931 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 22:21:28.029500  101931 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:21:28.029514  101931 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:21:28.029533  101931 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:21:28.029549  101931 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:21:28.029562  101931 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:21:28.029574  101931 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:21:28.029584  101931 command_runner.go:130] > # Example:
	I1212 22:21:28.029594  101931 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:21:28.029606  101931 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:21:28.029618  101931 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:21:28.029630  101931 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:21:28.029643  101931 command_runner.go:130] > # cpuset = 0
	I1212 22:21:28.029653  101931 command_runner.go:130] > # cpushares = "0-1"
	I1212 22:21:28.029663  101931 command_runner.go:130] > # Where:
	I1212 22:21:28.029672  101931 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:21:28.029691  101931 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:21:28.029704  101931 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:21:28.029717  101931 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:21:28.029733  101931 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:21:28.029746  101931 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:21:28.029755  101931 command_runner.go:130] > # 
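	(Editor's note: a minimal Go sketch of the workload annotations the comments above describe, following the example in the dump. The container name "nginx" and the cpushares value are hypothetical, and only the annotation key shapes come from the config text:)

package main

import "fmt"

// Minimal sketch of the annotations for the hypothetical "workload-type"
// workload above: the activation annotation (key only, value ignored) plus
// a per-container cpushares override, per the example in the config dump.
func main() {
	annotations := map[string]string{
		"io.crio/workload":            "",
		"io.crio.workload-type/nginx": `{"cpushares": "512"}`,
	}
	for k, v := range annotations {
		fmt.Printf("%s = %q\n", k, v)
	}
}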
	I1212 22:21:28.029767  101931 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:21:28.029775  101931 command_runner.go:130] > #
	I1212 22:21:28.029789  101931 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:21:28.029803  101931 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:21:28.029817  101931 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:21:28.029832  101931 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:21:28.029844  101931 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:21:28.029854  101931 command_runner.go:130] > [crio.image]
	I1212 22:21:28.029870  101931 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:21:28.029881  101931 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:21:28.029895  101931 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:21:28.029908  101931 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:21:28.029918  101931 command_runner.go:130] > # global_auth_file = ""
	I1212 22:21:28.029929  101931 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:21:28.029941  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:21:28.029953  101931 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:21:28.029967  101931 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:21:28.029980  101931 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:21:28.029992  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:21:28.030000  101931 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:21:28.030019  101931 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:21:28.030033  101931 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 22:21:28.030047  101931 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 22:21:28.030060  101931 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:21:28.030071  101931 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:21:28.030084  101931 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:21:28.030102  101931 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:21:28.030116  101931 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:21:28.030128  101931 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:21:28.030138  101931 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:21:28.030148  101931 command_runner.go:130] > # signature_policy = ""
	I1212 22:21:28.030168  101931 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:21:28.030182  101931 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:21:28.030195  101931 command_runner.go:130] > # changing them here.
	I1212 22:21:28.030206  101931 command_runner.go:130] > # insecure_registries = [
	I1212 22:21:28.030213  101931 command_runner.go:130] > # ]
	I1212 22:21:28.030227  101931 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:21:28.030239  101931 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:21:28.030252  101931 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:21:28.030265  101931 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:21:28.030276  101931 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 22:21:28.030290  101931 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 22:21:28.030299  101931 command_runner.go:130] > # CNI plugins.
	I1212 22:21:28.030308  101931 command_runner.go:130] > [crio.network]
	I1212 22:21:28.030321  101931 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:21:28.030338  101931 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 22:21:28.030349  101931 command_runner.go:130] > # cni_default_network = ""
	I1212 22:21:28.030362  101931 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:21:28.030373  101931 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:21:28.030386  101931 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:21:28.030394  101931 command_runner.go:130] > # plugin_dirs = [
	I1212 22:21:28.030404  101931 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:21:28.030412  101931 command_runner.go:130] > # ]
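	(Editor's note: per the [crio.network] comments above, when cni_default_network is unset CRI-O takes the first config found in network_dir. A minimal Go sketch, assuming the default /etc/cni/net.d/ directory shown above, that lists the candidates:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// Minimal sketch: list CNI config files the way the [crio.network] comments
// above describe — with cni_default_network unset, the lexically first file
// in network_dir wins. os.ReadDir returns entries sorted by name.
func main() {
	entries, err := os.ReadDir("/etc/cni/net.d/")
	if err != nil {
		panic(err)
	}
	var names []string
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".conf") || strings.HasSuffix(e.Name(), ".conflist") {
			names = append(names, e.Name())
		}
	}
	fmt.Println("CNI configs (first one is the default):", names)
}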
	I1212 22:21:28.030423  101931 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:21:28.030433  101931 command_runner.go:130] > [crio.metrics]
	I1212 22:21:28.030443  101931 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:21:28.030453  101931 command_runner.go:130] > # enable_metrics = false
	I1212 22:21:28.030463  101931 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:21:28.030473  101931 command_runner.go:130] > # By default, all metrics are enabled.
	I1212 22:21:28.030488  101931 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:21:28.030502  101931 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:21:28.030515  101931 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:21:28.030529  101931 command_runner.go:130] > # metrics_collectors = [
	I1212 22:21:28.030539  101931 command_runner.go:130] > # 	"operations",
	I1212 22:21:28.030549  101931 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:21:28.030559  101931 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:21:28.030567  101931 command_runner.go:130] > # 	"operations_errors",
	I1212 22:21:28.030578  101931 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:21:28.030586  101931 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:21:28.030597  101931 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:21:28.030607  101931 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:21:28.030618  101931 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:21:28.030626  101931 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:21:28.030637  101931 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:21:28.030644  101931 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:21:28.030652  101931 command_runner.go:130] > # 	"containers_oom",
	I1212 22:21:28.030662  101931 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:21:28.030672  101931 command_runner.go:130] > # 	"operations_total",
	I1212 22:21:28.030680  101931 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:21:28.030691  101931 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:21:28.030705  101931 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:21:28.030716  101931 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:21:28.030725  101931 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:21:28.030738  101931 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:21:28.030750  101931 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:21:28.030758  101931 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:21:28.030769  101931 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:21:28.030778  101931 command_runner.go:130] > # ]
	I1212 22:21:28.030787  101931 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:21:28.030797  101931 command_runner.go:130] > # metrics_port = 9090
	I1212 22:21:28.030812  101931 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:21:28.030823  101931 command_runner.go:130] > # metrics_socket = ""
	I1212 22:21:28.030835  101931 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:21:28.030846  101931 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:21:28.030860  101931 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:21:28.030871  101931 command_runner.go:130] > # certificate on any modification event.
	I1212 22:21:28.030881  101931 command_runner.go:130] > # metrics_cert = ""
	I1212 22:21:28.030891  101931 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:21:28.030906  101931 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:21:28.030916  101931 command_runner.go:130] > # metrics_key = ""
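	(Editor's note: a minimal Go sketch of scraping the Prometheus endpoint the [crio.metrics] section above configures. It assumes enable_metrics = true and the default metrics_port = 9090; both are commented out, i.e. metrics are disabled, in this dump:)

package main

import (
	"fmt"
	"io"
	"net/http"
)

// Minimal sketch: fetch CRI-O's Prometheus metrics. Assumes the
// [crio.metrics] table above has enable_metrics = true and the default
// metrics_port of 9090 — neither is actually enabled in this dump.
func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("got %d bytes of metrics\n", len(body))
}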
	I1212 22:21:28.030928  101931 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:21:28.030938  101931 command_runner.go:130] > [crio.tracing]
	I1212 22:21:28.030949  101931 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:21:28.030959  101931 command_runner.go:130] > # enable_tracing = false
	I1212 22:21:28.030971  101931 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 22:21:28.030982  101931 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:21:28.030994  101931 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:21:28.031006  101931 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 22:21:28.031020  101931 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:21:28.031029  101931 command_runner.go:130] > [crio.stats]
	I1212 22:21:28.031041  101931 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:21:28.031053  101931 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:21:28.031063  101931 command_runner.go:130] > # stats_collection_period = 0
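	(Editor's note: the dump above is CRI-O's effective configuration; the non-default values seen here, cgroup_manager = "cgroupfs" and conmon_cgroup = "pod", are the kind of settings usually pinned via a TOML drop-in. A minimal Go sketch, assuming CRI-O's conventional /etc/crio/crio.conf.d drop-in directory; the file name 99-minikube.conf is hypothetical:)

package main

import (
	"fmt"
	"os"
)

// Minimal sketch: write a TOML drop-in pinning the two non-default values
// visible in the dump above. Assumes the conventional /etc/crio/crio.conf.d
// drop-in directory (hypothetical file name); requires root, and CRI-O must
// be restarted to pick it up.
func main() {
	dropIn := `[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`
	if err := os.WriteFile("/etc/crio/crio.conf.d/99-minikube.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("drop-in written; restart crio to apply")
}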
	I1212 22:21:28.031105  101931 command_runner.go:130] ! time="2023-12-12 22:21:28.023008280Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1212 22:21:28.031130  101931 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 22:21:28.031229  101931 cni.go:84] Creating CNI manager for ""
	I1212 22:21:28.031247  101931 cni.go:136] 1 nodes found, recommending kindnet
	I1212 22:21:28.031269  101931 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:21:28.031305  101931 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-764961 NodeName:multinode-764961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:21:28.031457  101931 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-764961"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
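	(Editor's note: minikube renders the kubeadm config above from Go templates before logging it at kubeadm.go:181. A minimal sketch of that kind of template-driven rendering; the struct and template here are illustrative, not minikube's actual ones:)

package main

import (
	"os"
	"text/template"
)

// Minimal sketch of template-driven kubeadm config rendering, in the spirit
// of the "kubeadm config:" dump above. Struct and template are illustrative.
type nodeConfig struct {
	NodeName         string
	AdvertiseAddress string
	BindPort         int
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	cfg := nodeConfig{NodeName: "multinode-764961", AdvertiseAddress: "192.168.58.2", BindPort: 8443}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}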
	
	I1212 22:21:28.031533  101931 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-764961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 22:21:28.031611  101931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:21:28.038760  101931 command_runner.go:130] > kubeadm
	I1212 22:21:28.038776  101931 command_runner.go:130] > kubectl
	I1212 22:21:28.038779  101931 command_runner.go:130] > kubelet
	I1212 22:21:28.039366  101931 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:21:28.039431  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 22:21:28.046701  101931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1212 22:21:28.061984  101931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:21:28.077149  101931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1212 22:21:28.092299  101931 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1212 22:21:28.095160  101931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
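	(Editor's note: a minimal Go sketch of the idempotent /etc/hosts update the bash one-liner above performs — drop any stale control-plane.minikube.internal line, then append the current mapping. Illustrative only: it writes the file in place rather than via a temp file, and needs root:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// Minimal sketch mirroring the bash one-liner above: remove any existing
// control-plane.minikube.internal entry from /etc/hosts, then append the
// current IP mapping. In-place write; illustrative only.
func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.Contains(line, host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("192.168.58.2\t%s", host))
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}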
	I1212 22:21:28.104280  101931 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961 for IP: 192.168.58.2
	I1212 22:21:28.104357  101931 certs.go:190] acquiring lock for shared ca certs: {Name:mkef1e7b14f91e4f04d1e9cbbafdc8c42ba43b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:28.104501  101931 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key
	I1212 22:21:28.104544  101931 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key
	I1212 22:21:28.104585  101931 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key
	I1212 22:21:28.104599  101931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt with IP's: []
	I1212 22:21:28.216894  101931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt ...
	I1212 22:21:28.216926  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt: {Name:mk939bfbc29aadf4f5270770d68007da55c82bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:28.217083  101931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key ...
	I1212 22:21:28.217093  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key: {Name:mkc8c48bc0e64972b49dba343023d4122626cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:28.217168  101931 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key.cee25041
	I1212 22:21:28.217181  101931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 22:21:28.347023  101931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt.cee25041 ...
	I1212 22:21:28.347051  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt.cee25041: {Name:mkc3f06bfdbb8295a58ccbf5006d0e34a6efdc18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:28.347192  101931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key.cee25041 ...
	I1212 22:21:28.347203  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key.cee25041: {Name:mka51e0bae213242a648c95959091c104c7b0d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:28.347270  101931 certs.go:337] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt
	I1212 22:21:28.347361  101931 certs.go:341] copying /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key
	I1212 22:21:28.347415  101931 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.key
	I1212 22:21:28.347428  101931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.crt with IP's: []
	I1212 22:21:28.563388  101931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.crt ...
	I1212 22:21:28.563417  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.crt: {Name:mk5a02978daf29faa2dafce8ead854b8c2ab973a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:28.563571  101931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.key ...
	I1212 22:21:28.563583  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.key: {Name:mkc26e722064e977288608fc32b4f2eb6d43bec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
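	(Editor's note: the crypto.go lines above generate and write cert/key pairs. A minimal standard-library sketch of that kind of generation; field values are illustrative, and unlike minikube, which signs these certs with its CA, this one is self-signed:)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// Minimal sketch of the cert generation crypto.go logs above: create a key
// pair, self-sign a client certificate, and write both out as PEM. Subject,
// lifetime, and file names are illustrative only.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "example-client"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	os.WriteFile("client.crt", certPEM, 0o644)
	os.WriteFile("client.key", keyPEM, 0o600)
}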
	I1212 22:21:28.563658  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 22:21:28.563676  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 22:21:28.563686  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 22:21:28.563698  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 22:21:28.563707  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:21:28.563720  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:21:28.563732  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:21:28.563744  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:21:28.563799  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem (1338 bytes)
	W1212 22:21:28.563832  101931 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399_empty.pem, impossibly tiny 0 bytes
	I1212 22:21:28.563843  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 22:21:28.563869  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:21:28.563899  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:21:28.563923  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem (1675 bytes)
	I1212 22:21:28.563961  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:21:28.563986  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem -> /usr/share/ca-certificates/16399.pem
	I1212 22:21:28.564001  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> /usr/share/ca-certificates/163992.pem
	I1212 22:21:28.564012  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:21:28.564626  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 22:21:28.585902  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 22:21:28.606172  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 22:21:28.626100  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 22:21:28.646578  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:21:28.666234  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:21:28.686160  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:21:28.706009  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:21:28.726231  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem --> /usr/share/ca-certificates/16399.pem (1338 bytes)
	I1212 22:21:28.746663  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /usr/share/ca-certificates/163992.pem (1708 bytes)
	I1212 22:21:28.767174  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:21:28.786916  101931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 22:21:28.801453  101931 ssh_runner.go:195] Run: openssl version
	I1212 22:21:28.805894  101931 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1212 22:21:28.805963  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:21:28.813516  101931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:21:28.816492  101931 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:21:28.816516  101931 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:21:28.816542  101931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:21:28.822197  101931 command_runner.go:130] > b5213941
	I1212 22:21:28.822363  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 22:21:28.830096  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16399.pem && ln -fs /usr/share/ca-certificates/16399.pem /etc/ssl/certs/16399.pem"
	I1212 22:21:28.837933  101931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16399.pem
	I1212 22:21:28.840971  101931 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:08 /usr/share/ca-certificates/16399.pem
	I1212 22:21:28.841009  101931 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:08 /usr/share/ca-certificates/16399.pem
	I1212 22:21:28.841036  101931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16399.pem
	I1212 22:21:28.847018  101931 command_runner.go:130] > 51391683
	I1212 22:21:28.847074  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16399.pem /etc/ssl/certs/51391683.0"
	I1212 22:21:28.855032  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163992.pem && ln -fs /usr/share/ca-certificates/163992.pem /etc/ssl/certs/163992.pem"
	I1212 22:21:28.863046  101931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163992.pem
	I1212 22:21:28.865919  101931 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:08 /usr/share/ca-certificates/163992.pem
	I1212 22:21:28.865950  101931 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:08 /usr/share/ca-certificates/163992.pem
	I1212 22:21:28.865981  101931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163992.pem
	I1212 22:21:28.871815  101931 command_runner.go:130] > 3ec20f2e
	I1212 22:21:28.871968  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163992.pem /etc/ssl/certs/3ec20f2e.0"
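
For context: the three hash-and-link runs above install each CA into OpenSSL's trust directory. OpenSSL locates trusted certificates by subject hash, so a symlink named <hash>.0 in /etc/ssl/certs makes the PEM discoverable to anything linked against the system trust store. A minimal Go sketch of the same step (assuming openssl is on PATH; installCATrustLink is a hypothetical helper, not minikube's API):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCATrustLink mirrors the shell pair in the log above:
	//   openssl x509 -hash -noout -in <pem>                     -> subject hash, e.g. b5213941
	//   test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> <link>  -> trust-store symlink
	func installCATrustLink(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // link already present, nothing to do
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCATrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
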
	I1212 22:21:28.879709  101931 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:21:28.882351  101931 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:21:28.882383  101931 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:21:28.882423  101931 kubeadm.go:404] StartCluster: {Name:multinode-764961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:21:28.882496  101931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 22:21:28.882542  101931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 22:21:28.914485  101931 cri.go:89] found id: ""
	I1212 22:21:28.914549  101931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 22:21:28.921682  101931 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 22:21:28.921711  101931 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 22:21:28.921723  101931 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 22:21:28.922365  101931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 22:21:28.929697  101931 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1212 22:21:28.929757  101931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 22:21:28.936913  101931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 22:21:28.936931  101931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 22:21:28.936938  101931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 22:21:28.936947  101931 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:21:28.936973  101931 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 22:21:28.937001  101931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 22:21:28.979006  101931 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 22:21:28.979030  101931 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 22:21:28.979062  101931 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 22:21:28.979068  101931 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 22:21:29.012855  101931 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1212 22:21:29.012889  101931 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1212 22:21:29.012966  101931 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-gcp
	I1212 22:21:29.012973  101931 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1212 22:21:29.013034  101931 kubeadm.go:322] OS: Linux
	I1212 22:21:29.013064  101931 command_runner.go:130] > OS: Linux
	I1212 22:21:29.013120  101931 kubeadm.go:322] CGROUPS_CPU: enabled
	I1212 22:21:29.013131  101931 command_runner.go:130] > CGROUPS_CPU: enabled
	I1212 22:21:29.013208  101931 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1212 22:21:29.013218  101931 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1212 22:21:29.013280  101931 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1212 22:21:29.013293  101931 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1212 22:21:29.013338  101931 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1212 22:21:29.013346  101931 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1212 22:21:29.013392  101931 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1212 22:21:29.013402  101931 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1212 22:21:29.013487  101931 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1212 22:21:29.013498  101931 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1212 22:21:29.013542  101931 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1212 22:21:29.013549  101931 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1212 22:21:29.013600  101931 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1212 22:21:29.013607  101931 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1212 22:21:29.013640  101931 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1212 22:21:29.013646  101931 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1212 22:21:29.072367  101931 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:21:29.072396  101931 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 22:21:29.072555  101931 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:21:29.072565  101931 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 22:21:29.072632  101931 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:21:29.072640  101931 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 22:21:29.255888  101931 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:21:29.259474  101931 out.go:204]   - Generating certificates and keys ...
	I1212 22:21:29.255979  101931 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 22:21:29.259639  101931 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 22:21:29.259656  101931 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 22:21:29.259738  101931 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 22:21:29.259768  101931 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 22:21:29.440732  101931 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:21:29.440763  101931 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 22:21:29.550998  101931 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:21:29.551039  101931 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 22:21:29.848434  101931 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 22:21:29.848472  101931 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 22:21:30.032456  101931 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 22:21:30.032483  101931 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 22:21:30.120893  101931 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 22:21:30.120924  101931 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 22:21:30.121061  101931 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-764961] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 22:21:30.121071  101931 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-764961] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 22:21:30.303334  101931 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 22:21:30.303359  101931 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 22:21:30.303526  101931 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-764961] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 22:21:30.303563  101931 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-764961] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1212 22:21:30.501448  101931 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:21:30.501469  101931 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 22:21:30.626661  101931 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:21:30.626695  101931 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 22:21:30.806960  101931 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 22:21:30.806985  101931 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 22:21:30.807081  101931 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:21:30.807094  101931 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 22:21:31.031104  101931 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:21:31.031130  101931 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 22:21:31.141289  101931 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:21:31.141312  101931 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 22:21:31.305703  101931 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:21:31.305730  101931 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 22:21:31.466763  101931 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:21:31.466773  101931 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 22:21:31.467237  101931 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:21:31.467262  101931 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 22:21:31.469506  101931 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:21:31.471576  101931 out.go:204]   - Booting up control plane ...
	I1212 22:21:31.469583  101931 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 22:21:31.471680  101931 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:21:31.471692  101931 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 22:21:31.471781  101931 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:21:31.471801  101931 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 22:21:31.471869  101931 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:21:31.471877  101931 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 22:21:31.480179  101931 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:21:31.480199  101931 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:21:31.480886  101931 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:21:31.480905  101931 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:21:31.480987  101931 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 22:21:31.481000  101931 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:21:31.554260  101931 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:21:31.554284  101931 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 22:21:36.056118  101931 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501983 seconds
	I1212 22:21:36.056156  101931 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.501983 seconds
	I1212 22:21:36.056313  101931 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:21:36.056328  101931 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 22:21:36.066108  101931 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:21:36.066143  101931 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 22:21:36.586461  101931 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:21:36.586488  101931 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 22:21:36.586714  101931 kubeadm.go:322] [mark-control-plane] Marking the node multinode-764961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:21:36.586726  101931 command_runner.go:130] > [mark-control-plane] Marking the node multinode-764961 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 22:21:37.096460  101931 kubeadm.go:322] [bootstrap-token] Using token: euxc6c.30qmqxsmwlpy5stt
	I1212 22:21:37.098112  101931 out.go:204]   - Configuring RBAC rules ...
	I1212 22:21:37.096550  101931 command_runner.go:130] > [bootstrap-token] Using token: euxc6c.30qmqxsmwlpy5stt
	I1212 22:21:37.098256  101931 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:21:37.098286  101931 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 22:21:37.103484  101931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:21:37.103495  101931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 22:21:37.109415  101931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:21:37.109434  101931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 22:21:37.111864  101931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:21:37.111881  101931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 22:21:37.114331  101931 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:21:37.114350  101931 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 22:21:37.116662  101931 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:21:37.116681  101931 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 22:21:37.126266  101931 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:21:37.126286  101931 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 22:21:37.327622  101931 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 22:21:37.327647  101931 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 22:21:37.520677  101931 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 22:21:37.520731  101931 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 22:21:37.521882  101931 kubeadm.go:322] 
	I1212 22:21:37.521985  101931 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 22:21:37.522002  101931 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 22:21:37.522019  101931 kubeadm.go:322] 
	I1212 22:21:37.522124  101931 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 22:21:37.522158  101931 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 22:21:37.522185  101931 kubeadm.go:322] 
	I1212 22:21:37.522224  101931 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 22:21:37.522235  101931 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 22:21:37.522306  101931 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:21:37.522316  101931 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 22:21:37.522383  101931 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:21:37.522394  101931 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 22:21:37.522403  101931 kubeadm.go:322] 
	I1212 22:21:37.522476  101931 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 22:21:37.522486  101931 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 22:21:37.522492  101931 kubeadm.go:322] 
	I1212 22:21:37.522557  101931 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:21:37.522566  101931 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 22:21:37.522571  101931 kubeadm.go:322] 
	I1212 22:21:37.522622  101931 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 22:21:37.522629  101931 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 22:21:37.522686  101931 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:21:37.522695  101931 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 22:21:37.522782  101931 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:21:37.522801  101931 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 22:21:37.522807  101931 kubeadm.go:322] 
	I1212 22:21:37.522909  101931 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:21:37.522919  101931 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 22:21:37.523015  101931 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 22:21:37.523032  101931 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 22:21:37.523054  101931 kubeadm.go:322] 
	I1212 22:21:37.523168  101931 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token euxc6c.30qmqxsmwlpy5stt \
	I1212 22:21:37.523187  101931 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token euxc6c.30qmqxsmwlpy5stt \
	I1212 22:21:37.523312  101931 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f \
	I1212 22:21:37.523324  101931 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f \
	I1212 22:21:37.523351  101931 kubeadm.go:322] 	--control-plane 
	I1212 22:21:37.523361  101931 command_runner.go:130] > 	--control-plane 
	I1212 22:21:37.523367  101931 kubeadm.go:322] 
	I1212 22:21:37.523475  101931 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:21:37.523485  101931 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 22:21:37.523491  101931 kubeadm.go:322] 
	I1212 22:21:37.523622  101931 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token euxc6c.30qmqxsmwlpy5stt \
	I1212 22:21:37.523635  101931 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token euxc6c.30qmqxsmwlpy5stt \
	I1212 22:21:37.523791  101931 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f 
	I1212 22:21:37.523806  101931 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f 
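
The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's public-key pin: the string sha256: followed by the hex SHA-256 of the cluster CA certificate's Subject Public Key Info. A joining node can recompute it to verify it is talking to the right control plane; a small sketch (the CA path is the one the certs were copied to earlier in this log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// On this cluster the CA was copied to /var/lib/minikube/certs/ca.crt.
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
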
	I1212 22:21:37.526038  101931 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1212 22:21:37.526058  101931 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1212 22:21:37.526205  101931 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:21:37.526227  101931 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:21:37.526248  101931 cni.go:84] Creating CNI manager for ""
	I1212 22:21:37.526255  101931 cni.go:136] 1 nodes found, recommending kindnet
	I1212 22:21:37.528045  101931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 22:21:37.529791  101931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:21:37.533491  101931 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:21:37.533516  101931 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I1212 22:21:37.533529  101931 command_runner.go:130] > Device: 37h/55d	Inode: 573802      Links: 1
	I1212 22:21:37.533536  101931 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:21:37.533546  101931 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I1212 22:21:37.533557  101931 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I1212 22:21:37.533561  101931 command_runner.go:130] > Change: 2023-12-12 22:02:56.690856677 +0000
	I1212 22:21:37.533567  101931 command_runner.go:130] >  Birth: 2023-12-12 22:02:56.666854231 +0000
	I1212 22:21:37.533616  101931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:21:37.533629  101931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:21:37.549325  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:21:38.151609  101931 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 22:21:38.156056  101931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 22:21:38.162619  101931 command_runner.go:130] > serviceaccount/kindnet created
	I1212 22:21:38.172118  101931 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 22:21:38.176121  101931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 22:21:38.176259  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-764961 minikube.k8s.io/updated_at=2023_12_12T22_21_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:38.176284  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:38.183935  101931 command_runner.go:130] > -16
	I1212 22:21:38.183999  101931 ops.go:34] apiserver oom_adj: -16
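
The oom_adj probe above confirms the apiserver is protected from the kernel OOM killer: -16 is the legacy /proc view of the negative oom_score_adj the kubelet assigns to critical static pods, keeping kube-apiserver among the last processes killed under memory pressure. The same check in Go (pgrep on PATH assumed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err) // pgrep exits non-zero if no process matches
		}
		pid := strings.Fields(string(out))[0] // first match if several
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
	}
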
	I1212 22:21:38.239936  101931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 22:21:38.240117  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:38.249462  101931 command_runner.go:130] > node/multinode-764961 labeled
	I1212 22:21:38.298322  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:38.301010  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:38.362814  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:38.863638  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:38.924414  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:39.364025  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:39.425733  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:39.863955  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:39.924909  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:40.363835  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:40.423814  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:40.863707  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:40.924493  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:41.363690  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:41.425238  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:41.863189  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:41.926511  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:42.363083  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:42.423472  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:42.863064  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:42.924782  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:43.363332  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:43.424128  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:43.863765  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:43.928995  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:44.363671  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:44.424975  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:44.863087  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:44.921737  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:45.363666  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:45.423717  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:45.863729  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:45.925755  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:46.363586  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:46.426227  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:46.863852  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:46.927502  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:47.363067  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:47.427852  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:47.863383  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:47.923707  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:48.363032  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:48.425013  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:48.863988  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:48.927017  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:49.363660  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:49.427700  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:49.863357  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:49.926283  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:50.363923  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:50.424702  101931 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 22:21:50.863053  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:21:50.929522  101931 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 22:21:50.929549  101931 command_runner.go:130] > default   0         0s
	I1212 22:21:50.929574  101931 kubeadm.go:1088] duration metric: took 12.753382917s to wait for elevateKubeSystemPrivileges.
	I1212 22:21:50.929591  101931 kubeadm.go:406] StartCluster complete in 22.047171099s
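
The roughly twelve seconds of serviceaccounts "default" not found errors above are expected, not a failure: pods cannot be created in a namespace until the controller manager provisions its default ServiceAccount, so minikube retries kubectl get sa default on a short fixed interval until it appears. A minimal sketch of that retry loop (kubectl on PATH assumed; waitForDefaultSA is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "sa", "default").Run()
			if err == nil {
				return nil // the default ServiceAccount now exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not found within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
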
	I1212 22:21:50.929611  101931 settings.go:142] acquiring lock: {Name:mk857225ea2f0544984670c00dbb01f431ce59c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:50.929678  101931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:21:50.930615  101931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17761-9643/kubeconfig: {Name:mkd3e8de36f0003ff040c445ac6e47a46685daf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:21:50.930874  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 22:21:50.930989  101931 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 22:21:50.931079  101931 addons.go:69] Setting storage-provisioner=true in profile "multinode-764961"
	I1212 22:21:50.931101  101931 addons.go:231] Setting addon storage-provisioner=true in "multinode-764961"
	I1212 22:21:50.931104  101931 addons.go:69] Setting default-storageclass=true in profile "multinode-764961"
	I1212 22:21:50.931122  101931 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:21:50.931129  101931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-764961"
	I1212 22:21:50.931158  101931 host.go:66] Checking if "multinode-764961" exists ...
	I1212 22:21:50.931255  101931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:21:50.931540  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:21:50.931620  101931 kapi.go:59] client config for multinode-764961: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:21:50.931946  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:21:50.932423  101931 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 22:21:50.932662  101931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:21:50.932690  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:50.932702  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:50.932711  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:50.942497  101931 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 22:21:50.942519  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:50.942530  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:50.942538  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:50.942556  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:50.942568  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:50.942578  101931 round_trippers.go:580]     Content-Length: 291
	I1212 22:21:50.942586  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:50 GMT
	I1212 22:21:50.942598  101931 round_trippers.go:580]     Audit-Id: 8609ddfd-74e0-45ae-a6e6-57693dae76a9
	I1212 22:21:50.942630  101931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f85012d3-692f-41ec-aa39-a084333b6df8","resourceVersion":"225","creationTimestamp":"2023-12-12T22:21:37Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:21:50.943027  101931 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f85012d3-692f-41ec-aa39-a084333b6df8","resourceVersion":"225","creationTimestamp":"2023-12-12T22:21:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:21:50.943085  101931 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:21:50.943095  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:50.943102  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:50.943110  101931 round_trippers.go:473]     Content-Type: application/json
	I1212 22:21:50.943116  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:50.948871  101931 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 22:21:50.948891  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:50.948902  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:50.948912  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:50.948920  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:50.948930  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:50.948943  101931 round_trippers.go:580]     Content-Length: 291
	I1212 22:21:50.948952  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:50 GMT
	I1212 22:21:50.948961  101931 round_trippers.go:580]     Audit-Id: 1d2dd9e1-e108-4b88-b6e5-3809d3e69854
	I1212 22:21:50.948989  101931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f85012d3-692f-41ec-aa39-a084333b6df8","resourceVersion":"307","creationTimestamp":"2023-12-12T22:21:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:21:50.949149  101931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:21:50.949180  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:50.949191  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:50.949200  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:50.950890  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:21:50.950922  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:50.950932  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:50.950944  101931 round_trippers.go:580]     Content-Length: 291
	I1212 22:21:50.950951  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:50 GMT
	I1212 22:21:50.950959  101931 round_trippers.go:580]     Audit-Id: 90c7bdf7-77c8-40ef-b145-c8bb0b104111
	I1212 22:21:50.950966  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:50.950976  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:50.950988  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:50.951072  101931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f85012d3-692f-41ec-aa39-a084333b6df8","resourceVersion":"307","creationTimestamp":"2023-12-12T22:21:37Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 22:21:50.951167  101931 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-764961" context rescaled to 1 replicas
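
The GET/PUT pair above edits the Scale subresource of the coredns Deployment directly, dropping spec.replicas from 2 to 1, since a second CoreDNS replica buys nothing on a single node. minikube issues the raw requests shown; with client-go's typed clientset the same rescale looks roughly like this sketch (the kubeconfig path is illustrative):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deploys := client.AppsV1().Deployments("kube-system")
		// GET .../deployments/coredns/scale
		scale, err := deploys.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1 // one replica is enough on a single node
		// PUT .../deployments/coredns/scale
		if _, err := deploys.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
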
	I1212 22:21:50.951218  101931 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 22:21:50.952827  101931 out.go:177] * Verifying Kubernetes components...
	I1212 22:21:50.951580  101931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:21:50.954332  101931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 22:21:50.954340  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:21:50.955836  101931 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:21:50.954641  101931 kapi.go:59] client config for multinode-764961: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:21:50.955876  101931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 22:21:50.955939  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:50.956128  101931 addons.go:231] Setting addon default-storageclass=true in "multinode-764961"
	I1212 22:21:50.956166  101931 host.go:66] Checking if "multinode-764961" exists ...
	I1212 22:21:50.956702  101931 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:21:50.974002  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:50.976011  101931 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 22:21:50.976038  101931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 22:21:50.976084  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:21:51.000146  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:21:51.045102  101931 command_runner.go:130] > apiVersion: v1
	I1212 22:21:51.045128  101931 command_runner.go:130] > data:
	I1212 22:21:51.045134  101931 command_runner.go:130] >   Corefile: |
	I1212 22:21:51.045140  101931 command_runner.go:130] >     .:53 {
	I1212 22:21:51.045146  101931 command_runner.go:130] >         errors
	I1212 22:21:51.045152  101931 command_runner.go:130] >         health {
	I1212 22:21:51.045159  101931 command_runner.go:130] >            lameduck 5s
	I1212 22:21:51.045165  101931 command_runner.go:130] >         }
	I1212 22:21:51.045170  101931 command_runner.go:130] >         ready
	I1212 22:21:51.045179  101931 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 22:21:51.045188  101931 command_runner.go:130] >            pods insecure
	I1212 22:21:51.045196  101931 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 22:21:51.045205  101931 command_runner.go:130] >            ttl 30
	I1212 22:21:51.045210  101931 command_runner.go:130] >         }
	I1212 22:21:51.045216  101931 command_runner.go:130] >         prometheus :9153
	I1212 22:21:51.045224  101931 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 22:21:51.045231  101931 command_runner.go:130] >            max_concurrent 1000
	I1212 22:21:51.045237  101931 command_runner.go:130] >         }
	I1212 22:21:51.045243  101931 command_runner.go:130] >         cache 30
	I1212 22:21:51.045249  101931 command_runner.go:130] >         loop
	I1212 22:21:51.045254  101931 command_runner.go:130] >         reload
	I1212 22:21:51.045260  101931 command_runner.go:130] >         loadbalance
	I1212 22:21:51.045265  101931 command_runner.go:130] >     }
	I1212 22:21:51.045270  101931 command_runner.go:130] > kind: ConfigMap
	I1212 22:21:51.045275  101931 command_runner.go:130] > metadata:
	I1212 22:21:51.045288  101931 command_runner.go:130] >   creationTimestamp: "2023-12-12T22:21:37Z"
	I1212 22:21:51.045294  101931 command_runner.go:130] >   name: coredns
	I1212 22:21:51.045301  101931 command_runner.go:130] >   namespace: kube-system
	I1212 22:21:51.045314  101931 command_runner.go:130] >   resourceVersion: "221"
	I1212 22:21:51.045326  101931 command_runner.go:130] >   uid: 9a37ddf7-f8f4-40d5-b5e6-b4d673e1fbe8
	I1212 22:21:51.048035  101931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:21:51.048496  101931 kapi.go:59] client config for multinode-764961: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:21:51.048853  101931 node_ready.go:35] waiting up to 6m0s for node "multinode-764961" to be "Ready" ...
	I1212 22:21:51.048950  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:51.048955  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:51.048963  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:51.048968  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:51.049316  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 22:21:51.051410  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:51.051436  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:51.051445  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:51.051451  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:51.051457  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:51 GMT
	I1212 22:21:51.051462  101931 round_trippers.go:580]     Audit-Id: 3be7a887-3c71-4464-8d6b-f9d2f2cbf801
	I1212 22:21:51.051470  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:51.051478  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:51.051659  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:51.052421  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:51.052441  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:51.052452  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:51.052462  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:51.054296  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:21:51.054317  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:51.054323  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:51.054333  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:51.054342  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:51.054355  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:51.054364  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:51 GMT
	I1212 22:21:51.054376  101931 round_trippers.go:580]     Audit-Id: 2bed378a-e142-4170-a97b-aeab59007e0d
	I1212 22:21:51.054505  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:51.140906  101931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 22:21:51.240755  101931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 22:21:51.555397  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:51.555437  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:51.555445  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:51.555464  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:51.620464  101931 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I1212 22:21:51.620517  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:51.620528  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:51.620538  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:51.620546  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:51.620555  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:51 GMT
	I1212 22:21:51.620562  101931 round_trippers.go:580]     Audit-Id: 894fe34c-1142-46c0-a783-e16d4aa105a2
	I1212 22:21:51.620570  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:51.620768  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:51.827532  101931 command_runner.go:130] > configmap/coredns replaced
	I1212 22:21:51.827611  101931 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
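The replace pipeline above shells out to kubectl and sed on the node to splice a hosts block ahead of the forward directive, so CoreDNS resolves host.minikube.internal to the gateway IP 192.168.58.1. For illustration only, a rough client-go equivalent of the same edit (an assumed equivalent, not the kubectl+sed mechanism minikube actually uses):

	// Sketch: fetch the coredns ConfigMap, insert a hosts block before the
	// forward directive, and write it back.
	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Match the 8-space indentation visible in the Corefile dump above.
		hosts := "        hosts {\n" +
			"           192.168.58.1 host.minikube.internal\n" +
			"           fallthrough\n" +
			"        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward . /etc/resolv.conf",
			hosts+"        forward . /etc/resolv.conf", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}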
	I1212 22:21:52.052672  101931 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 22:21:52.055854  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:52.055870  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:52.055877  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:52.055884  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:52.058215  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:52.058247  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:52.058257  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:52.058265  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:52.058288  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:52.058299  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:52.058308  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:52 GMT
	I1212 22:21:52.058318  101931 round_trippers.go:580]     Audit-Id: e331d0b1-f8e4-4e7b-8c97-7206ff4fc158
	I1212 22:21:52.058501  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:52.058863  101931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 22:21:52.064576  101931 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 22:21:52.070114  101931 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 22:21:52.075531  101931 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 22:21:52.082910  101931 command_runner.go:130] > pod/storage-provisioner created
	I1212 22:21:52.087591  101931 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 22:21:52.087735  101931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 22:21:52.087751  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:52.087761  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:52.087770  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:52.089291  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:21:52.089306  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:52.089312  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:52.089317  101931 round_trippers.go:580]     Content-Length: 1273
	I1212 22:21:52.089322  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:52 GMT
	I1212 22:21:52.089330  101931 round_trippers.go:580]     Audit-Id: 9b757291-bebe-444a-8805-1c21f9e1da90
	I1212 22:21:52.089335  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:52.089342  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:52.089347  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:52.089377  101931 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"367"},"items":[{"metadata":{"name":"standard","uid":"627d2fca-48d0-4f77-85a7-5872f58677b6","resourceVersion":"355","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 22:21:52.089709  101931 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"627d2fca-48d0-4f77-85a7-5872f58677b6","resourceVersion":"355","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 22:21:52.089755  101931 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 22:21:52.089768  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:52.089776  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:52.089784  101931 round_trippers.go:473]     Content-Type: application/json
	I1212 22:21:52.089791  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:52.091747  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:21:52.091763  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:52.091769  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:52 GMT
	I1212 22:21:52.091775  101931 round_trippers.go:580]     Audit-Id: c1a2c284-5bae-4fa4-8284-82c5bf38846c
	I1212 22:21:52.091780  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:52.091788  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:52.091796  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:52.091808  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:52.091819  101931 round_trippers.go:580]     Content-Length: 1220
	I1212 22:21:52.091864  101931 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"627d2fca-48d0-4f77-85a7-5872f58677b6","resourceVersion":"355","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 22:21:52.093577  101931 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 22:21:52.094822  101931 addons.go:502] enable addons completed in 1.163836069s: enabled=[storage-provisioner default-storageclass]
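The GET/PUT pair just before this asserts the storageclass.kubernetes.io/is-default-class annotation on the freshly created "standard" StorageClass. A minimal client-go sketch of the same idea (an assumed equivalent, not minikube's exact code path):

	// Sketch: mark the "standard" StorageClass as the cluster default by
	// setting the same annotation the PUT body above carries.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}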
	I1212 22:21:52.555347  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:52.555368  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:52.555376  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:52.555382  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:52.557259  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:21:52.557285  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:52.557296  101931 round_trippers.go:580]     Audit-Id: 2910597c-d0d7-429e-8f99-ab9bd0a8bfef
	I1212 22:21:52.557304  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:52.557312  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:52.557318  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:52.557330  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:52.557341  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:52 GMT
	I1212 22:21:52.557469  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:53.055951  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:53.055973  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:53.055981  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:53.055988  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:53.058197  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:53.058216  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:53.058222  101931 round_trippers.go:580]     Audit-Id: 72ecafd8-4c23-4569-a715-d463d73f7148
	I1212 22:21:53.058228  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:53.058233  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:53.058238  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:53.058244  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:53.058250  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:53 GMT
	I1212 22:21:53.058369  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:53.058668  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
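From here on, the log is the readiness loop: node_ready.go re-issues GET /api/v1/nodes/multinode-764961 roughly every half second and checks the node's Ready condition, up to the 6m0s budget declared earlier. A hedged sketch of such a loop (assumed shape, not minikube's exact code; interval inferred from the request spacing above):

	// Sketch: poll the node until its Ready condition turns True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs *kubernetes.Clientset, name string) error {
		return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(cs, "multinode-764961"); err != nil {
			panic(err)
		}
		fmt.Println(`node "multinode-764961" is Ready`)
	}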
	I1212 22:21:53.555988  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:53.556010  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:53.556020  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:53.556028  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:53.558115  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:53.558142  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:53.558152  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:53.558160  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:53.558168  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:53.558175  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:53 GMT
	I1212 22:21:53.558184  101931 round_trippers.go:580]     Audit-Id: 32c407ae-ab6b-4f95-97b9-7050628132c2
	I1212 22:21:53.558197  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:53.558337  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:54.055945  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:54.055968  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:54.055976  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:54.055981  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:54.058499  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:54.058519  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:54.058526  101931 round_trippers.go:580]     Audit-Id: b3073c89-66b2-4f99-9c00-4752fcebdd65
	I1212 22:21:54.058531  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:54.058536  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:54.058541  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:54.058547  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:54.058552  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:54 GMT
	I1212 22:21:54.058656  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:54.555101  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:54.555128  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:54.555136  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:54.555142  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:54.557257  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:54.557282  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:54.557291  101931 round_trippers.go:580]     Audit-Id: 14d8c79a-10fe-47c4-95c7-3aff02cfe7af
	I1212 22:21:54.557296  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:54.557301  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:54.557306  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:54.557312  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:54.557317  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:54 GMT
	I1212 22:21:54.557426  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:55.054984  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:55.055009  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:55.055019  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:55.055028  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:55.057253  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:55.057280  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:55.057291  101931 round_trippers.go:580]     Audit-Id: d51f2cd9-679e-47be-98d2-317f22a8b7fb
	I1212 22:21:55.057300  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:55.057309  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:55.057331  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:55.057341  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:55.057346  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:55 GMT
	I1212 22:21:55.057438  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:55.554971  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:55.554992  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:55.555000  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:55.555006  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:55.557191  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:55.557216  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:55.557226  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:55.557235  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:55.557243  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:55.557253  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:55.557269  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:55 GMT
	I1212 22:21:55.557278  101931 round_trippers.go:580]     Audit-Id: a6363143-a0bf-4a4c-b8bc-235626561aa6
	I1212 22:21:55.557448  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:55.557775  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:21:56.055011  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:56.055040  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:56.055048  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:56.055054  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:56.057029  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:21:56.057053  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:56.057062  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:56.057071  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:56.057080  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:56 GMT
	I1212 22:21:56.057090  101931 round_trippers.go:580]     Audit-Id: 49f12d65-201b-4c2d-a585-25315082fc5f
	I1212 22:21:56.057102  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:56.057110  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:56.057237  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:56.554965  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:56.554987  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:56.554994  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:56.555000  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:56.557184  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:56.557214  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:56.557223  101931 round_trippers.go:580]     Audit-Id: fd1b2a18-cc2a-4bbd-848a-4afcf5f8828a
	I1212 22:21:56.557231  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:56.557238  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:56.557245  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:56.557254  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:56.557263  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:56 GMT
	I1212 22:21:56.557388  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:57.055908  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:57.055929  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:57.055937  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:57.055943  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:57.058287  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:57.058319  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:57.058330  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:57.058338  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:57.058347  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:57.058361  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:57 GMT
	I1212 22:21:57.058367  101931 round_trippers.go:580]     Audit-Id: 446cb547-07b4-4ad7-a13c-74379e2a70d4
	I1212 22:21:57.058372  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:57.058494  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:57.555006  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:57.555033  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:57.555041  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:57.555047  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:57.557316  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:57.557336  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:57.557342  101931 round_trippers.go:580]     Audit-Id: b4f06630-647e-46a5-8822-e389005ad4e1
	I1212 22:21:57.557348  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:57.557353  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:57.557358  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:57.557363  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:57.557368  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:57 GMT
	I1212 22:21:57.557463  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:57.557793  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:21:58.055135  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:58.055160  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:58.055171  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:58.055179  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:58.057299  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:58.057319  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:58.057325  101931 round_trippers.go:580]     Audit-Id: 46e3fc12-609a-4116-b96a-48220d66c251
	I1212 22:21:58.057331  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:58.057338  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:58.057346  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:58.057354  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:58.057365  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:58 GMT
	I1212 22:21:58.057493  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:58.555097  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:58.555117  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:58.555125  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:58.555131  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:58.558826  101931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:21:58.558846  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:58.558853  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:58.558859  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:58.558864  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:58.558872  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:58.558880  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:58 GMT
	I1212 22:21:58.558889  101931 round_trippers.go:580]     Audit-Id: d244ed9a-d90f-4dc0-a180-bde44ef53870
	I1212 22:21:58.559113  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:59.055702  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:59.055729  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:59.055737  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:59.055747  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:59.057896  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:59.057920  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:59.057929  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:59.057937  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:59 GMT
	I1212 22:21:59.057944  101931 round_trippers.go:580]     Audit-Id: 590ae0c3-44a8-4fd9-a318-3a42597f168c
	I1212 22:21:59.057951  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:59.057959  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:59.057966  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:59.058076  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:59.555709  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:21:59.555736  101931 round_trippers.go:469] Request Headers:
	I1212 22:21:59.555749  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:21:59.555759  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:21:59.557829  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:21:59.557848  101931 round_trippers.go:577] Response Headers:
	I1212 22:21:59.557855  101931 round_trippers.go:580]     Audit-Id: 295758d2-27d0-4a63-81f0-8a5335d92a09
	I1212 22:21:59.557864  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:21:59.557873  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:21:59.557887  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:21:59.557895  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:21:59.557906  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:21:59 GMT
	I1212 22:21:59.558056  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:21:59.558392  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:00.055688  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:00.055715  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:00.055727  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:00.055737  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:00.057819  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:00.057842  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:00.057852  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:00 GMT
	I1212 22:22:00.057860  101931 round_trippers.go:580]     Audit-Id: 572720b6-ff7a-4252-845e-7c53e4d02129
	I1212 22:22:00.057868  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:00.057878  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:00.057886  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:00.057894  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:00.058047  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:00.555701  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:00.555722  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:00.555730  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:00.555737  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:00.557861  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:00.557879  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:00.557886  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:00 GMT
	I1212 22:22:00.557891  101931 round_trippers.go:580]     Audit-Id: 1085aca6-b340-48c4-8d0a-2bbe14043767
	I1212 22:22:00.557896  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:00.557901  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:00.557906  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:00.557911  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:00.558117  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:01.055798  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:01.055819  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:01.055827  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:01.055832  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:01.058031  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:01.058056  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:01.058066  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:01 GMT
	I1212 22:22:01.058074  101931 round_trippers.go:580]     Audit-Id: 6a087ddc-6f6f-4086-9042-d21d76ad0457
	I1212 22:22:01.058082  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:01.058090  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:01.058098  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:01.058110  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:01.058214  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:01.556009  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:01.556033  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:01.556045  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:01.556053  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:01.558353  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:01.558373  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:01.558380  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:01.558386  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:01.558394  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:01.558403  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:01.558410  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:01 GMT
	I1212 22:22:01.558417  101931 round_trippers.go:580]     Audit-Id: e06c7f76-5543-4b7b-93be-a01df06854e0
	I1212 22:22:01.558535  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:01.558840  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:02.055112  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:02.055136  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:02.055143  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:02.055149  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:02.057342  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:02.057361  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:02.057370  101931 round_trippers.go:580]     Audit-Id: 80e66d50-36e7-4469-9504-fc6537c82bd4
	I1212 22:22:02.057378  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:02.057385  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:02.057438  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:02.057448  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:02.057455  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:02 GMT
	I1212 22:22:02.057558  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:02.555052  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:02.555073  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:02.555083  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:02.555089  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:02.557202  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:02.557219  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:02.557225  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:02.557231  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:02.557236  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:02 GMT
	I1212 22:22:02.557249  101931 round_trippers.go:580]     Audit-Id: 3b605009-f584-472e-bf54-5a9f81846389
	I1212 22:22:02.557258  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:02.557280  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:02.557397  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:03.054990  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:03.055026  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:03.055034  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:03.055039  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:03.057451  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:03.057471  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:03.057478  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:03.057484  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:03.057489  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:03.057494  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:03.057504  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:03 GMT
	I1212 22:22:03.057512  101931 round_trippers.go:580]     Audit-Id: bf50b7a1-966d-4d71-b113-e9652428a4a7
	I1212 22:22:03.057645  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:03.555114  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:03.555145  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:03.555154  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:03.555160  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:03.557396  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:03.557418  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:03.557428  101931 round_trippers.go:580]     Audit-Id: 7a860bdd-c421-4fbf-86f2-72f8d08398f0
	I1212 22:22:03.557436  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:03.557445  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:03.557452  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:03.557460  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:03.557467  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:03 GMT
	I1212 22:22:03.557603  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:04.055265  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:04.055288  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:04.055296  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:04.055308  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:04.057561  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:04.057580  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:04.057586  101931 round_trippers.go:580]     Audit-Id: 0cd32e0e-b0b1-461a-a390-0c195df7ad39
	I1212 22:22:04.057592  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:04.057597  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:04.057605  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:04.057613  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:04.057627  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:04 GMT
	I1212 22:22:04.057741  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:04.058074  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:04.555373  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:04.555398  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:04.555410  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:04.555420  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:04.557575  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:04.557600  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:04.557607  101931 round_trippers.go:580]     Audit-Id: 13916f8d-5d4f-4291-810d-4c3ec6952f14
	I1212 22:22:04.557612  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:04.557617  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:04.557623  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:04.557628  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:04.557636  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:04 GMT
	I1212 22:22:04.557794  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:05.055290  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:05.055320  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:05.055328  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:05.055334  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:05.057535  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:05.057559  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:05.057571  101931 round_trippers.go:580]     Audit-Id: 911ed583-167f-48eb-8afd-e599073f1918
	I1212 22:22:05.057579  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:05.057590  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:05.057599  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:05.057611  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:05.057617  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:05 GMT
	I1212 22:22:05.057741  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:05.555044  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:05.555067  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:05.555076  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:05.555082  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:05.557039  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:05.557061  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:05.557068  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:05.557073  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:05.557078  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:05.557083  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:05 GMT
	I1212 22:22:05.557091  101931 round_trippers.go:580]     Audit-Id: 08ef74fa-65d8-4552-913c-c66746040a38
	I1212 22:22:05.557099  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:05.557229  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:06.055859  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:06.055880  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:06.055888  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:06.055893  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:06.057704  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:06.057726  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:06.057748  101931 round_trippers.go:580]     Audit-Id: d058afe3-e376-458a-bc06-bccded7365f1
	I1212 22:22:06.057756  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:06.057766  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:06.057774  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:06.057790  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:06.057800  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:06 GMT
	I1212 22:22:06.057892  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:06.058233  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:06.555741  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:06.555760  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:06.555768  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:06.555774  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:06.558087  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:06.558108  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:06.558118  101931 round_trippers.go:580]     Audit-Id: d476abe0-073a-40e2-a81f-8e68135ea174
	I1212 22:22:06.558127  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:06.558135  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:06.558144  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:06.558157  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:06.558167  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:06 GMT
	I1212 22:22:06.558288  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:07.055940  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:07.055962  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:07.055970  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:07.055976  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:07.058090  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:07.058112  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:07.058121  101931 round_trippers.go:580]     Audit-Id: 18265ab8-e6f7-4b58-8fea-533ed8439c44
	I1212 22:22:07.058130  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:07.058138  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:07.058151  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:07.058159  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:07.058169  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:07 GMT
	I1212 22:22:07.058285  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:07.555885  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:07.555909  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:07.555920  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:07.555928  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:07.558634  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:07.558660  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:07.558670  101931 round_trippers.go:580]     Audit-Id: 691f491e-3a56-4150-a7b3-8ed5a17d84bf
	I1212 22:22:07.558679  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:07.558688  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:07.558699  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:07.558706  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:07.558714  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:07 GMT
	I1212 22:22:07.558831  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:08.055417  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:08.055439  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:08.055447  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:08.055453  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:08.057702  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:08.057724  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:08.057734  101931 round_trippers.go:580]     Audit-Id: 55746e84-ef4f-423b-9bf8-14422ea7b628
	I1212 22:22:08.057745  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:08.057752  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:08.057761  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:08.057767  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:08.057774  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:08 GMT
	I1212 22:22:08.057873  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:08.555440  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:08.555462  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:08.555470  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:08.555476  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:08.557842  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:08.557864  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:08.557875  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:08 GMT
	I1212 22:22:08.557888  101931 round_trippers.go:580]     Audit-Id: dd96832f-a0e3-4641-917a-2be4bbf82da0
	I1212 22:22:08.557898  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:08.557907  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:08.557919  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:08.557933  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:08.558086  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:08.558529  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:09.055634  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:09.055654  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:09.055662  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:09.055668  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:09.057835  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:09.057862  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:09.057873  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:09 GMT
	I1212 22:22:09.057880  101931 round_trippers.go:580]     Audit-Id: 00f0d6f2-e7ba-4cb1-94dc-4bedaecce06e
	I1212 22:22:09.057886  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:09.057891  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:09.057900  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:09.057905  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:09.058012  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:09.555649  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:09.555672  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:09.555681  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:09.555687  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:09.557848  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:09.557873  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:09.557884  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:09.557893  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:09.557906  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:09 GMT
	I1212 22:22:09.557920  101931 round_trippers.go:580]     Audit-Id: 08cf2956-8b94-4b8d-8da1-8d1a59475b5d
	I1212 22:22:09.557927  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:09.557933  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:09.558052  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:10.055664  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:10.055685  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:10.055693  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:10.055699  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:10.057951  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:10.057977  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:10.057987  101931 round_trippers.go:580]     Audit-Id: dc86f6bb-cca5-4389-881a-9ec05c1775be
	I1212 22:22:10.057996  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:10.058003  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:10.058012  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:10.058024  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:10.058032  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:10 GMT
	I1212 22:22:10.058126  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:10.555753  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:10.555774  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:10.555782  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:10.555788  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:10.558069  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:10.558092  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:10.558101  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:10.558110  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:10 GMT
	I1212 22:22:10.558121  101931 round_trippers.go:580]     Audit-Id: 0c9b774e-46a2-4a74-9057-b15a949431aa
	I1212 22:22:10.558130  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:10.558137  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:10.558149  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:10.558321  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:10.558632  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
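	[editor's note] The repeated GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961 entries above are minikube's node-readiness poll (node_ready.go): roughly every 500 ms it fetches the Node object and checks whether the Ready condition is True, logging `"Ready":"False"` until the kubelet reports otherwise. A minimal sketch of the same pattern using client-go is below; the function name waitForNodeReady, the 500 ms interval, and the KUBECONFIG-based setup are illustrative assumptions, not minikube's actual helper code.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady issues GET /api/v1/nodes/<name> in a loop, mirroring
	// the round_trippers entries in the log above, until the node reports
	// Ready=True or the timeout expires.
	func waitForNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
				// Matches the log line: node "<name>" has status "Ready":"False"
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
			time.Sleep(500 * time.Millisecond) // the log shows a ~500 ms cadence
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		// Illustrative setup: build a clientset from $KUBECONFIG.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForNodeReady(context.Background(), client, "multinode-764961", 6*time.Minute); err != nil {
			panic(err)
		}
	}

	[/editor's note]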
	I1212 22:22:11.055023  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:11.055044  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:11.055052  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:11.055058  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:11.057197  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:11.057225  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:11.057235  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:11.057243  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:11.057252  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:11.057260  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:11.057269  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:11 GMT
	I1212 22:22:11.057282  101931 round_trippers.go:580]     Audit-Id: dbf789d3-a1f0-4db8-9123-f0e9258b7b5d
	I1212 22:22:11.057393  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:11.555023  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:11.555043  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:11.555051  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:11.555057  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:11.557185  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:11.557204  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:11.557210  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:11.557215  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:11.557221  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:11.557226  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:11 GMT
	I1212 22:22:11.557231  101931 round_trippers.go:580]     Audit-Id: ad51b179-639b-4239-86d4-64b591e73668
	I1212 22:22:11.557236  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:11.557381  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:12.054937  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:12.054965  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:12.054972  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:12.054979  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:12.057111  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:12.057132  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:12.057141  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:12.057148  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:12.057155  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:12 GMT
	I1212 22:22:12.057162  101931 round_trippers.go:580]     Audit-Id: 655c7c47-754b-4cb4-b686-64fa345bbd3d
	I1212 22:22:12.057170  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:12.057181  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:12.057273  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:12.555890  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:12.555910  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:12.555918  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:12.555924  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:12.558114  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:12.558135  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:12.558147  101931 round_trippers.go:580]     Audit-Id: 0f3e2b49-7b2d-47f1-8085-d3994a7c564e
	I1212 22:22:12.558156  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:12.558163  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:12.558172  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:12.558185  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:12.558195  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:12 GMT
	I1212 22:22:12.558316  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:13.055921  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:13.055942  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:13.055952  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:13.055958  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:13.058220  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:13.058240  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:13.058249  101931 round_trippers.go:580]     Audit-Id: b0eb6ff9-5b2e-436d-87a1-c7856903f023
	I1212 22:22:13.058256  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:13.058263  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:13.058271  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:13.058279  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:13.058288  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:13 GMT
	I1212 22:22:13.058421  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:13.058727  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:13.555059  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:13.555086  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:13.555099  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:13.555110  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:13.557344  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:13.557367  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:13.557376  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:13.557384  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:13.557391  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:13.557399  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:13.557407  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:13 GMT
	I1212 22:22:13.557448  101931 round_trippers.go:580]     Audit-Id: e20e9a01-a496-4259-a79e-8f64bf7b45f4
	I1212 22:22:13.557601  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:14.055132  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:14.055158  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:14.055166  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:14.055173  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:14.057332  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:14.057371  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:14.057378  101931 round_trippers.go:580]     Audit-Id: 9f0448b5-888c-49d4-aedf-6af7884fb314
	I1212 22:22:14.057383  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:14.057389  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:14.057394  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:14.057404  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:14.057409  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:14 GMT
	I1212 22:22:14.057506  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:14.555082  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:14.555105  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:14.555112  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:14.555118  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:14.557688  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:14.557710  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:14.557717  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:14 GMT
	I1212 22:22:14.557723  101931 round_trippers.go:580]     Audit-Id: 1778e26b-2d62-4365-8a33-194333ecbed1
	I1212 22:22:14.557728  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:14.557734  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:14.557739  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:14.557744  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:14.557997  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:15.055401  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:15.055437  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:15.055448  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:15.055457  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:15.057522  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:15.057541  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:15.057550  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:15.057556  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:15 GMT
	I1212 22:22:15.057564  101931 round_trippers.go:580]     Audit-Id: 3f10a5bb-742a-43d2-8419-69b968301dea
	I1212 22:22:15.057571  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:15.057580  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:15.057589  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:15.057685  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:15.555230  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:15.555265  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:15.555275  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:15.555283  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:15.557667  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:15.557695  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:15.557705  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:15.557713  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:15 GMT
	I1212 22:22:15.557721  101931 round_trippers.go:580]     Audit-Id: 5c33b9c4-405d-4449-824b-bf48fd220ac8
	I1212 22:22:15.557729  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:15.557745  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:15.557754  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:15.557914  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:15.558364  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:16.055418  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:16.055438  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:16.055445  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:16.055451  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:16.057544  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:16.057567  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:16.057577  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:16.057585  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:16.057592  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:16.057600  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:16.057609  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:16 GMT
	I1212 22:22:16.057622  101931 round_trippers.go:580]     Audit-Id: 49bdf514-bcb9-4e5a-bffc-9296dba958ca
	I1212 22:22:16.057741  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:16.555787  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:16.555811  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:16.555819  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:16.555826  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:16.558058  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:16.558076  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:16.558082  101931 round_trippers.go:580]     Audit-Id: aef48735-5d8d-439d-a545-8c4d862f6d50
	I1212 22:22:16.558088  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:16.558093  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:16.558100  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:16.558108  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:16.558119  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:16 GMT
	I1212 22:22:16.558247  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:17.055927  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:17.055965  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:17.055974  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:17.055981  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:17.058160  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:17.058185  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:17.058195  101931 round_trippers.go:580]     Audit-Id: 419da9ad-3889-4f82-9135-a68a7f023245
	I1212 22:22:17.058201  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:17.058209  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:17.058217  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:17.058226  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:17.058235  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:17 GMT
	I1212 22:22:17.058369  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:17.554941  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:17.554981  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:17.554993  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:17.555001  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:17.557056  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:17.557083  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:17.557094  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:17.557104  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:17 GMT
	I1212 22:22:17.557113  101931 round_trippers.go:580]     Audit-Id: fec82bb1-2860-461b-bc7d-e82e52eb0b5a
	I1212 22:22:17.557119  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:17.557124  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:17.557129  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:17.557360  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:18.055948  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:18.055970  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:18.055978  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:18.055994  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:18.058271  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:18.058288  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:18.058295  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:18.058300  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:18.058306  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:18 GMT
	I1212 22:22:18.058314  101931 round_trippers.go:580]     Audit-Id: ca288596-e41f-4b83-85b4-7ce2ddb184a5
	I1212 22:22:18.058321  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:18.058329  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:18.058470  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:18.058755  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:18.555049  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:18.555069  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:18.555077  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:18.555083  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:18.557188  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:18.557206  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:18.557212  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:18.557217  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:18.557225  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:18 GMT
	I1212 22:22:18.557233  101931 round_trippers.go:580]     Audit-Id: d897a008-125a-4b94-85be-b448fd17e175
	I1212 22:22:18.557241  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:18.557249  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:18.557362  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:19.055934  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:19.055962  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:19.055970  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:19.055976  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:19.058055  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:19.058079  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:19.058088  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:19.058096  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:19.058103  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:19.058110  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:19 GMT
	I1212 22:22:19.058117  101931 round_trippers.go:580]     Audit-Id: 12a2c9b6-b6b1-4e77-b405-b66967e64d44
	I1212 22:22:19.058129  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:19.058231  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:19.555912  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:19.555934  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:19.555954  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:19.555963  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:19.558172  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:19.558190  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:19.558196  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:19.558202  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:19.558207  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:19.558212  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:19.558217  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:19 GMT
	I1212 22:22:19.558227  101931 round_trippers.go:580]     Audit-Id: 93594231-184b-490c-b421-740916c3f8d0
	I1212 22:22:19.558442  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:20.055047  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:20.055069  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:20.055078  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:20.055084  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:20.057409  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:20.057432  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:20.057443  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:20.057452  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:20 GMT
	I1212 22:22:20.057458  101931 round_trippers.go:580]     Audit-Id: 9385785d-b64f-42a1-8b1a-85aadd424b7c
	I1212 22:22:20.057463  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:20.057471  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:20.057480  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:20.057600  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:20.555133  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:20.555154  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:20.555162  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:20.555168  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:20.557282  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:20.557303  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:20.557311  101931 round_trippers.go:580]     Audit-Id: 2af9e44b-1450-4217-9e0d-0c38fd9d65c9
	I1212 22:22:20.557317  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:20.557322  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:20.557327  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:20.557333  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:20.557338  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:20 GMT
	I1212 22:22:20.557529  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:20.557863  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:21.055116  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:21.055155  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:21.055175  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:21.055191  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:21.057536  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:21.057555  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:21.057562  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:21.057569  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:21 GMT
	I1212 22:22:21.057574  101931 round_trippers.go:580]     Audit-Id: 156ce3eb-2257-484a-ae0f-3909c3a3e6e7
	I1212 22:22:21.057579  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:21.057584  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:21.057589  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:21.057693  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:21.555503  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:21.555527  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:21.555534  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:21.555540  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:21.557786  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:21.557808  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:21.557817  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:21.557824  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:21.557831  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:21.557839  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:21.557847  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:21 GMT
	I1212 22:22:21.557858  101931 round_trippers.go:580]     Audit-Id: 03d93f58-0c1f-4dcf-8653-7e8238bf636d
	I1212 22:22:21.558044  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:22.055631  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:22.055653  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:22.055661  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:22.055672  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:22.058009  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:22.058035  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:22.058044  101931 round_trippers.go:580]     Audit-Id: 931e4566-14d8-4378-9b64-0f46efeee5bd
	I1212 22:22:22.058052  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:22.058059  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:22.058067  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:22.058074  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:22.058082  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:22 GMT
	I1212 22:22:22.058250  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:22.555894  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:22.555918  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:22.555926  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:22.555932  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:22.558043  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:22.558068  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:22.558079  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:22.558089  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:22.558098  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:22 GMT
	I1212 22:22:22.558106  101931 round_trippers.go:580]     Audit-Id: 6d1ec2eb-ddf4-407b-9be8-0c9eef51e1d9
	I1212 22:22:22.558118  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:22.558129  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:22.558238  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"293","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1212 22:22:22.558540  101931 node_ready.go:58] node "multinode-764961" has status "Ready":"False"
	I1212 22:22:23.055867  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:23.055889  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.055944  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.055961  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.058038  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:23.058053  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.058059  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.058064  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.058069  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.058074  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.058081  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.058086  101931 round_trippers.go:580]     Audit-Id: 3ca6a948-39c8-48d7-a818-763e8785847f
	I1212 22:22:23.058229  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:23.058541  101931 node_ready.go:49] node "multinode-764961" has status "Ready":"True"
	I1212 22:22:23.058558  101931 node_ready.go:38] duration metric: took 32.009682004s waiting for node "multinode-764961" to be "Ready" ...
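
	Note: the node_ready loop logged above simply re-issues GET /api/v1/nodes/multinode-764961 roughly every 500 ms until the node's Ready condition flips to True (32.0 s in this run). A minimal, illustrative sketch of an equivalent wait, assuming client-go; the function name waitNodeReady is hypothetical and this is not minikube's exact implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is
	// True or the timeout elapses, mirroring the GET loop in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node now reports "Ready":"True"
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence seen above
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		// Load the default kubeconfig the way kubectl would (illustrative setup).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "multinode-764961", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
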
	I1212 22:22:23.058568  101931 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:22:23.058640  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:22:23.058650  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.058657  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.058663  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.061777  101931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:22:23.061799  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.061808  101931 round_trippers.go:580]     Audit-Id: 7b1b6ff5-8e6e-4451-bbdc-752fb0009a1f
	I1212 22:22:23.061816  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.061825  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.061837  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.061847  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.061861  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.062348  101931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"390"},"items":[{"metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"390","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1212 22:22:23.065217  101931 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b6lvq" in "kube-system" namespace to be "Ready" ...
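
	Note: the per-pod wait that follows applies the analogous predicate to each system pod: a pod counts as Ready when its PodReady condition is True. An illustrative helper, again assuming client-go types; the package and function names are hypothetical, not minikube's exact code:

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// IsPodReady reports whether a pod's Ready condition is True, which is
	// the check behind the pod_ready.go wait entries logged below.
	func IsPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}
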
	I1212 22:22:23.065281  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-b6lvq
	I1212 22:22:23.065290  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.065296  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.065302  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.067108  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:23.067127  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.067133  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.067138  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.067143  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.067149  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.067154  101931 round_trippers.go:580]     Audit-Id: e5a38f74-6e26-4c9a-a15c-30e71a32584a
	I1212 22:22:23.067161  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.067276  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"390","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 22:22:23.067673  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:23.067686  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.067693  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.067699  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.069296  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:23.069312  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.069321  101931 round_trippers.go:580]     Audit-Id: 259cc8da-3e74-48ed-85bd-c608e81ba0fe
	I1212 22:22:23.069329  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.069336  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.069345  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.069354  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.069367  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.069497  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:23.069911  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-b6lvq
	I1212 22:22:23.069924  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.069934  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.069944  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.071343  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:23.071359  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.071367  101931 round_trippers.go:580]     Audit-Id: 0c4607af-21ce-4dbf-9a2f-21b488759f76
	I1212 22:22:23.071375  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.071382  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.071393  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.071405  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.071414  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.071565  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"390","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 22:22:23.071914  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:23.071927  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.071937  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.071946  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.073382  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:23.073395  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.073401  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.073406  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.073411  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.073416  101931 round_trippers.go:580]     Audit-Id: 9e1c5e76-21fe-4d07-ac4d-951ae9905488
	I1212 22:22:23.073421  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.073429  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.073547  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:23.574328  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-b6lvq
	I1212 22:22:23.574353  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.574366  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.574376  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.576453  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:23.576469  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.576476  101931 round_trippers.go:580]     Audit-Id: bb48d6b9-33b3-4dd2-835b-1338883cf7e7
	I1212 22:22:23.576489  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.576496  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.576504  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.576512  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.576523  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.576657  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"390","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 22:22:23.577069  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:23.577083  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:23.577090  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:23.577098  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:23.578801  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:23.578824  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:23.578834  101931 round_trippers.go:580]     Audit-Id: 21851df7-e723-4b34-933d-d03c16086ea8
	I1212 22:22:23.578841  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:23.578850  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:23.578857  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:23.578871  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:23.578878  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:23 GMT
	I1212 22:22:23.579004  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:24.074623  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-b6lvq
	I1212 22:22:24.074652  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.074662  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.074670  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.076924  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:24.076951  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.076960  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.076968  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.076977  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.076986  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.077003  101931 round_trippers.go:580]     Audit-Id: 2658180c-6216-45f2-b8df-1a8fc65d365a
	I1212 22:22:24.077016  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.077132  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"403","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1212 22:22:24.077572  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.077586  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.077593  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.077604  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.079453  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:24.079473  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.079482  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.079489  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.079497  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.079505  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.079515  101931 round_trippers.go:580]     Audit-Id: 76f329e3-a6b6-49dd-ad48-b9f66c41f5b7
	I1212 22:22:24.079527  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.079668  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:24.079955  101931 pod_ready.go:92] pod "coredns-5dd5756b68-b6lvq" in "kube-system" namespace has status "Ready":"True"
	I1212 22:22:24.079972  101931 pod_ready.go:81] duration metric: took 1.01473096s waiting for pod "coredns-5dd5756b68-b6lvq" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.079984  101931 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.080033  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-764961
	I1212 22:22:24.080042  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.080052  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.080062  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.081700  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:24.081716  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.081723  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.081735  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.081740  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.081745  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.081753  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.081758  101931 round_trippers.go:580]     Audit-Id: d186f5a4-f7dc-4bb9-b78a-437d70f22f7b
	I1212 22:22:24.081894  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-764961","namespace":"kube-system","uid":"5295004b-e5f0-4870-9c31-a49e4912eb6b","resourceVersion":"260","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"b159538976e895ac2cf46c4cbb67dcbf","kubernetes.io/config.mirror":"b159538976e895ac2cf46c4cbb67dcbf","kubernetes.io/config.seen":"2023-12-12T22:21:37.367840499Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1212 22:22:24.082235  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.082248  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.082254  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.082260  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.083924  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:24.083944  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.083953  101931 round_trippers.go:580]     Audit-Id: 8df3bcdc-4482-40d1-a0a4-357dde6df1e6
	I1212 22:22:24.083961  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.083970  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.083985  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.083994  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.084005  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.084103  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:24.084382  101931 pod_ready.go:92] pod "etcd-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:22:24.084398  101931 pod_ready.go:81] duration metric: took 4.406785ms waiting for pod "etcd-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.084409  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.084454  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-764961
	I1212 22:22:24.084461  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.084467  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.084473  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.086101  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:24.086119  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.086128  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.086136  101931 round_trippers.go:580]     Audit-Id: 452e5cdd-984e-4cf9-8428-5cf8695efc54
	I1212 22:22:24.086145  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.086188  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.086230  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.086244  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.086382  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-764961","namespace":"kube-system","uid":"9570752b-45ee-405d-a6e2-fc0b9aa28c7b","resourceVersion":"295","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e2d2f9495e644af7e6228f8e856d9854","kubernetes.io/config.mirror":"e2d2f9495e644af7e6228f8e856d9854","kubernetes.io/config.seen":"2023-12-12T22:21:37.367844265Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1212 22:22:24.086781  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.086795  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.086802  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.086809  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.088375  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:24.088397  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.088407  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.088416  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.088427  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.088437  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.088447  101931 round_trippers.go:580]     Audit-Id: a81901f6-26ce-43bf-a511-c4cc2c6ea037
	I1212 22:22:24.088457  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.088600  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:24.088880  101931 pod_ready.go:92] pod "kube-apiserver-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:22:24.088895  101931 pod_ready.go:81] duration metric: took 4.475972ms waiting for pod "kube-apiserver-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.088907  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.088968  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-764961
	I1212 22:22:24.088980  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.088990  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.088999  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.090630  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:24.090643  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.090652  101931 round_trippers.go:580]     Audit-Id: ede1d475-6d39-4d51-9d82-f5cb52d90a54
	I1212 22:22:24.090661  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.090669  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.090679  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.090691  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.090702  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.090852  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-764961","namespace":"kube-system","uid":"01087a6d-6662-4b6c-8793-8a9da414ac2e","resourceVersion":"261","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1359d7bb8db476a8baa267a12bd5c655","kubernetes.io/config.mirror":"1359d7bb8db476a8baa267a12bd5c655","kubernetes.io/config.seen":"2023-12-12T22:21:31.876138175Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1212 22:22:24.256531  101931 request.go:629] Waited for 165.335459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.256597  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.256604  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.256614  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.256634  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.258690  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:24.258707  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.258714  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.258719  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.258725  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.258730  101931 round_trippers.go:580]     Audit-Id: eba72d6e-ce66-4402-9c12-d80ef91108ed
	I1212 22:22:24.258735  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.258744  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.258897  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:24.259181  101931 pod_ready.go:92] pod "kube-controller-manager-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:22:24.259194  101931 pod_ready.go:81] duration metric: took 170.280209ms waiting for pod "kube-controller-manager-multinode-764961" in "kube-system" namespace to be "Ready" ...
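
The "Waited for ... due to client-side throttling, not priority and fairness" messages in this stretch come from client-go's client-side rate limiter, not from server-side priority and fairness. As a rough illustration only, the limiter is configured on the rest.Config used to build the clientset; the kubeconfig path and the QPS/Burst values below are assumptions, not what minikube actually configures:

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        // client-go defaults to QPS=5 and Burst=10; raising them avoids the
        // sub-second throttling waits visible in the surrounding log lines.
        // These numbers are illustrative.
        config.QPS = 50
        config.Burst = 100
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("clientset ready: %T\n", clientset)
    }
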
	I1212 22:22:24.259204  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smjqf" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.456625  101931 request.go:629] Waited for 197.350471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-smjqf
	I1212 22:22:24.456690  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-smjqf
	I1212 22:22:24.456698  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.456713  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.456727  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.458780  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:24.458805  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.458814  101931 round_trippers.go:580]     Audit-Id: 4a47ac11-d060-4ffa-9181-6b989b59625a
	I1212 22:22:24.458822  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.458830  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.458839  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.458851  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.458858  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.458976  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-smjqf","generateName":"kube-proxy-","namespace":"kube-system","uid":"00b947bc-a444-4666-a553-2d8a2c47b671","resourceVersion":"369","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae66b779-df4d-4acd-be39-7df3a52caef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae66b779-df4d-4acd-be39-7df3a52caef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1212 22:22:24.656749  101931 request.go:629] Waited for 197.371904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.656821  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:24.656826  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.656833  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.656844  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.659050  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:24.659068  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.659074  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.659081  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.659086  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.659091  101931 round_trippers.go:580]     Audit-Id: 4a721799-1aa9-45cd-a9e1-d7809cb283de
	I1212 22:22:24.659103  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.659111  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.659252  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:24.659604  101931 pod_ready.go:92] pod "kube-proxy-smjqf" in "kube-system" namespace has status "Ready":"True"
	I1212 22:22:24.659623  101931 pod_ready.go:81] duration metric: took 400.41158ms waiting for pod "kube-proxy-smjqf" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.659632  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:24.855954  101931 request.go:629] Waited for 196.262766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-764961
	I1212 22:22:24.856033  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-764961
	I1212 22:22:24.856047  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:24.856059  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:24.856069  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:24.858295  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:24.858315  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:24.858322  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:24.858328  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:24.858333  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:24.858338  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:24.858344  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:24 GMT
	I1212 22:22:24.858352  101931 round_trippers.go:580]     Audit-Id: 438eb410-53dc-48ae-8a56-b646b05475b8
	I1212 22:22:24.858474  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-764961","namespace":"kube-system","uid":"7f50f24e-9282-404b-9242-703201ac2c66","resourceVersion":"259","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0b8d03fbc8eff19771ab68a013adbf93","kubernetes.io/config.mirror":"0b8d03fbc8eff19771ab68a013adbf93","kubernetes.io/config.seen":"2023-12-12T22:21:37.367847050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1212 22:22:25.056199  101931 request.go:629] Waited for 197.380372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:25.056271  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:22:25.056279  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:25.056286  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:25.056293  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:25.058480  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:25.058507  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:25.058518  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:25.058528  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:25.058535  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:25.058543  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:25 GMT
	I1212 22:22:25.058552  101931 round_trippers.go:580]     Audit-Id: 2a3127a7-bb44-424d-96ad-b9e95ceabfc0
	I1212 22:22:25.058564  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:25.058666  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:22:25.059135  101931 pod_ready.go:92] pod "kube-scheduler-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:22:25.059161  101931 pod_ready.go:81] duration metric: took 399.520726ms waiting for pod "kube-scheduler-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:22:25.059186  101931 pod_ready.go:38] duration metric: took 2.000591228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
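
For reference, the pod_ready waits recorded above amount to repeatedly fetching a pod and testing its Ready condition, with node lookups interleaved. Below is a minimal client-go sketch of that kind of poll, not minikube's own pod_ready implementation; the pod name, 500ms cadence, and 6m timeout are taken from the log, while the kubeconfig location is an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        // Poll every 500ms for up to 6 minutes, matching the cadence and
        // timeout visible in the log above.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-b6lvq", metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return isReady(pod), nil
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pod is Ready")
    }
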
	I1212 22:22:25.059211  101931 api_server.go:52] waiting for apiserver process to appear ...
	I1212 22:22:25.059272  101931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:22:25.068752  101931 command_runner.go:130] > 1443
	I1212 22:22:25.069538  101931 api_server.go:72] duration metric: took 34.118283017s to wait for apiserver process to appear ...
	I1212 22:22:25.069562  101931 api_server.go:88] waiting for apiserver healthz status ...
	I1212 22:22:25.069581  101931 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1212 22:22:25.074290  101931 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1212 22:22:25.074344  101931 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1212 22:22:25.074352  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:25.074360  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:25.074366  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:25.075184  101931 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 22:22:25.075197  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:25.075202  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:25.075208  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:25.075213  101931 round_trippers.go:580]     Content-Length: 264
	I1212 22:22:25.075218  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:25 GMT
	I1212 22:22:25.075226  101931 round_trippers.go:580]     Audit-Id: 53fbf513-564d-4c94-8134-c51821e90e87
	I1212 22:22:25.075231  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:25.075239  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:25.075252  101931 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 22:22:25.075322  101931 api_server.go:141] control plane version: v1.28.4
	I1212 22:22:25.075336  101931 api_server.go:131] duration metric: took 5.768454ms to wait for apiserver health ...
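
The healthz and version probes above map onto two plain GETs against the API server, which client-go can issue through its discovery client. A small sketch under the same default-kubeconfig assumption as before; the /healthz and /version paths match the log, everything else is illustrative:

    package main

    import (
        "context"
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        // GET /healthz, the endpoint checked above; a healthy server answers "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version, which returned v1.28.4 in the log above.
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("control plane version:", info.GitVersion)
    }
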
	I1212 22:22:25.075342  101931 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 22:22:25.256697  101931 request.go:629] Waited for 181.300079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:22:25.256773  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:22:25.256785  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:25.256797  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:25.256808  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:25.259837  101931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:22:25.259861  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:25.259871  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:25 GMT
	I1212 22:22:25.259879  101931 round_trippers.go:580]     Audit-Id: ee07c1c4-a351-4eef-b126-d960bcd2f57b
	I1212 22:22:25.259886  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:25.259896  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:25.259909  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:25.259922  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:25.260271  101931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"403","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1212 22:22:25.261906  101931 system_pods.go:59] 8 kube-system pods found
	I1212 22:22:25.261944  101931 system_pods.go:61] "coredns-5dd5756b68-b6lvq" [b130b370-8465-4b9a-973d-8ff1bb6df10a] Running
	I1212 22:22:25.261955  101931 system_pods.go:61] "etcd-multinode-764961" [5295004b-e5f0-4870-9c31-a49e4912eb6b] Running
	I1212 22:22:25.261962  101931 system_pods.go:61] "kindnet-5fp6n" [2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87] Running
	I1212 22:22:25.261974  101931 system_pods.go:61] "kube-apiserver-multinode-764961" [9570752b-45ee-405d-a6e2-fc0b9aa28c7b] Running
	I1212 22:22:25.261983  101931 system_pods.go:61] "kube-controller-manager-multinode-764961" [01087a6d-6662-4b6c-8793-8a9da414ac2e] Running
	I1212 22:22:25.261996  101931 system_pods.go:61] "kube-proxy-smjqf" [00b947bc-a444-4666-a553-2d8a2c47b671] Running
	I1212 22:22:25.262002  101931 system_pods.go:61] "kube-scheduler-multinode-764961" [7f50f24e-9282-404b-9242-703201ac2c66] Running
	I1212 22:22:25.262008  101931 system_pods.go:61] "storage-provisioner" [3b49595a-49e0-4c15-b383-68af29aadc8f] Running
	I1212 22:22:25.262016  101931 system_pods.go:74] duration metric: took 186.667743ms to wait for pod list to return data ...
	I1212 22:22:25.262029  101931 default_sa.go:34] waiting for default service account to be created ...
	I1212 22:22:25.456434  101931 request.go:629] Waited for 194.329059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:22:25.456502  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1212 22:22:25.456508  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:25.456516  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:25.456529  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:25.458808  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:25.458827  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:25.458837  101931 round_trippers.go:580]     Content-Length: 261
	I1212 22:22:25.458845  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:25 GMT
	I1212 22:22:25.458852  101931 round_trippers.go:580]     Audit-Id: 44439d5e-8b2c-4804-8877-5ce2b9aaca71
	I1212 22:22:25.458860  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:25.458867  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:25.458879  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:25.458887  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:25.458909  101931 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"eadba665-f461-4931-b166-f8fe293038c2","resourceVersion":"302","creationTimestamp":"2023-12-12T22:21:50Z"}}]}
	I1212 22:22:25.459135  101931 default_sa.go:45] found service account: "default"
	I1212 22:22:25.459155  101931 default_sa.go:55] duration metric: took 197.11662ms for default service account to be created ...
	I1212 22:22:25.459164  101931 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 22:22:25.656597  101931 request.go:629] Waited for 197.366104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:22:25.656668  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:22:25.656673  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:25.656681  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:25.656695  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:25.659917  101931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:22:25.659937  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:25.659952  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:25.659958  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:25.659963  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:25.659969  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:25.659974  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:25 GMT
	I1212 22:22:25.659979  101931 round_trippers.go:580]     Audit-Id: 5adfbdd8-0e42-46cb-9258-a26201f72ade
	I1212 22:22:25.660405  101931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"403","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1212 22:22:25.662044  101931 system_pods.go:86] 8 kube-system pods found
	I1212 22:22:25.662067  101931 system_pods.go:89] "coredns-5dd5756b68-b6lvq" [b130b370-8465-4b9a-973d-8ff1bb6df10a] Running
	I1212 22:22:25.662074  101931 system_pods.go:89] "etcd-multinode-764961" [5295004b-e5f0-4870-9c31-a49e4912eb6b] Running
	I1212 22:22:25.662080  101931 system_pods.go:89] "kindnet-5fp6n" [2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87] Running
	I1212 22:22:25.662086  101931 system_pods.go:89] "kube-apiserver-multinode-764961" [9570752b-45ee-405d-a6e2-fc0b9aa28c7b] Running
	I1212 22:22:25.662094  101931 system_pods.go:89] "kube-controller-manager-multinode-764961" [01087a6d-6662-4b6c-8793-8a9da414ac2e] Running
	I1212 22:22:25.662104  101931 system_pods.go:89] "kube-proxy-smjqf" [00b947bc-a444-4666-a553-2d8a2c47b671] Running
	I1212 22:22:25.662114  101931 system_pods.go:89] "kube-scheduler-multinode-764961" [7f50f24e-9282-404b-9242-703201ac2c66] Running
	I1212 22:22:25.662123  101931 system_pods.go:89] "storage-provisioner" [3b49595a-49e0-4c15-b383-68af29aadc8f] Running
	I1212 22:22:25.662133  101931 system_pods.go:126] duration metric: took 202.962403ms to wait for k8s-apps to be running ...
	I1212 22:22:25.662150  101931 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:22:25.662197  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:22:25.672616  101931 system_svc.go:56] duration metric: took 10.459626ms WaitForService to wait for kubelet.
	I1212 22:22:25.672639  101931 kubeadm.go:581] duration metric: took 34.721386542s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
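
The kubelet check above relies on `systemctl is-active --quiet` exiting zero only when the unit is active. Minikube runs that command over SSH inside the node; the sketch below assumes direct shell access to a host with systemd instead, purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active,
        // which is the property the log's kubelet wait depends on.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is running")
    }
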
	I1212 22:22:25.672660  101931 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:22:25.856000  101931 request.go:629] Waited for 183.268663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1212 22:22:25.856051  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1212 22:22:25.856056  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:25.856063  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:25.856072  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:25.858381  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:25.858403  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:25.858412  101931 round_trippers.go:580]     Audit-Id: df275c05-5f5e-4c31-ab78-314e91fe9e1b
	I1212 22:22:25.858420  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:25.858428  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:25.858437  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:25.858447  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:25.858459  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:25 GMT
	I1212 22:22:25.858571  101931 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"408"},"items":[{"metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1212 22:22:25.859042  101931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 22:22:25.859069  101931 node_conditions.go:123] node cpu capacity is 8
	I1212 22:22:25.859088  101931 node_conditions.go:105] duration metric: took 186.422502ms to run NodePressure ...
	I1212 22:22:25.859102  101931 start.go:228] waiting for startup goroutines ...
	I1212 22:22:25.859113  101931 start.go:233] waiting for cluster config update ...
	I1212 22:22:25.859125  101931 start.go:242] writing updated cluster config ...
	I1212 22:22:25.861761  101931 out.go:177] 
	I1212 22:22:25.863207  101931 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:22:25.863279  101931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/config.json ...
	I1212 22:22:25.865126  101931 out.go:177] * Starting worker node multinode-764961-m02 in cluster multinode-764961
	I1212 22:22:25.867003  101931 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:22:25.868596  101931 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:22:25.870064  101931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:22:25.870086  101931 cache.go:56] Caching tarball of preloaded images
	I1212 22:22:25.870162  101931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:22:25.870181  101931 preload.go:174] Found /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 22:22:25.870190  101931 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1212 22:22:25.870254  101931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/config.json ...
	I1212 22:22:25.886062  101931 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 22:22:25.886083  101931 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	I1212 22:22:25.886105  101931 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:22:25.886140  101931 start.go:365] acquiring machines lock for multinode-764961-m02: {Name:mkacda637706dea010632d74bcf924c3826aecc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:22:25.886255  101931 start.go:369] acquired machines lock for "multinode-764961-m02" in 92.373µs
	I1212 22:22:25.886286  101931 start.go:93] Provisioning new machine with config: &{Name:multinode-764961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
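
For readability, here is a heavily trimmed sketch of the cluster config dumped above. Field names and values come straight from the dump; everything omitted is elided here rather than defaulted, so treat this as a reading aid, not minikube's actual types:

package main

import "fmt"

// Node mirrors the per-node entries at the end of the dump above.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

// ClusterConfig keeps only the fields relevant to creating m02.
type ClusterConfig struct {
	Name   string
	Memory int // MB
	CPUs   int
	Driver string
	Nodes  []Node
}

func main() {
	cfg := ClusterConfig{
		Name: "multinode-764961", Memory: 2200, CPUs: 2, Driver: "docker",
		Nodes: []Node{
			{IP: "192.168.58.2", Port: 8443, KubernetesVersion: "v1.28.4", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
			{Name: "m02", KubernetesVersion: "v1.28.4", ContainerRuntime: "crio", Worker: true},
		},
	}
	fmt.Printf("%+v\n", cfg)
}
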
	I1212 22:22:25.886381  101931 start.go:125] createHost starting for "m02" (driver="docker")
	I1212 22:22:25.888387  101931 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1212 22:22:25.888482  101931 start.go:159] libmachine.API.Create for "multinode-764961" (driver="docker")
	I1212 22:22:25.888505  101931 client.go:168] LocalClient.Create starting
	I1212 22:22:25.888557  101931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem
	I1212 22:22:25.888588  101931 main.go:141] libmachine: Decoding PEM data...
	I1212 22:22:25.888609  101931 main.go:141] libmachine: Parsing certificate...
	I1212 22:22:25.888670  101931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem
	I1212 22:22:25.888695  101931 main.go:141] libmachine: Decoding PEM data...
	I1212 22:22:25.888714  101931 main.go:141] libmachine: Parsing certificate...
	I1212 22:22:25.888934  101931 cli_runner.go:164] Run: docker network inspect multinode-764961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:22:25.904673  101931 network_create.go:77] Found existing network {name:multinode-764961 subnet:0xc003142c00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1212 22:22:25.904706  101931 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-764961-m02" container
	I1212 22:22:25.904758  101931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 22:22:25.919305  101931 cli_runner.go:164] Run: docker volume create multinode-764961-m02 --label name.minikube.sigs.k8s.io=multinode-764961-m02 --label created_by.minikube.sigs.k8s.io=true
	I1212 22:22:25.935096  101931 oci.go:103] Successfully created a docker volume multinode-764961-m02
	I1212 22:22:25.935162  101931 cli_runner.go:164] Run: docker run --rm --name multinode-764961-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-764961-m02 --entrypoint /usr/bin/test -v multinode-764961-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -d /var/lib
	I1212 22:22:26.433736  101931 oci.go:107] Successfully prepared a docker volume multinode-764961-m02
	I1212 22:22:26.433776  101931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:22:26.433801  101931 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 22:22:26.433860  101931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-764961-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 22:22:31.473412  101931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-764961-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 -I lz4 -xf /preloaded.tar -C /extractDir: (5.039494994s)
	I1212 22:22:31.473445  101931 kic.go:203] duration metric: took 5.039641 seconds to extract preloaded images to volume
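
The five-second step above is the preload trick: instead of pulling every image on the new node, minikube bind-mounts the lz4 tarball of preloaded images read-only into a throwaway kicbase container and untars it straight into the node's named volume. A stand-alone sketch reproducing the same docker invocation (the host tarball path is a placeholder, and the image's sha256 pin is dropped for brevity):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same invocation as the log line above; adjust the tarball path.
	args := []string{
		"run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "multinode-764961-m02:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	start := time.Now()
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		fmt.Println(err, string(out))
		return
	}
	fmt.Println("extracted in", time.Since(start))
}
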
	W1212 22:22:31.473597  101931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 22:22:31.473718  101931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 22:22:31.523862  101931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-764961-m02 --name multinode-764961-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-764961-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-764961-m02 --network multinode-764961 --ip 192.168.58.3 --volume multinode-764961-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 22:22:31.833462  101931 cli_runner.go:164] Run: docker container inspect multinode-764961-m02 --format={{.State.Running}}
	I1212 22:22:31.850364  101931 cli_runner.go:164] Run: docker container inspect multinode-764961-m02 --format={{.State.Status}}
	I1212 22:22:31.867754  101931 cli_runner.go:164] Run: docker exec multinode-764961-m02 stat /var/lib/dpkg/alternatives/iptables
	I1212 22:22:31.907052  101931 oci.go:144] the created container "multinode-764961-m02" has a running status.
	I1212 22:22:31.907091  101931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa...
	I1212 22:22:32.048297  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 22:22:32.048350  101931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 22:22:32.068250  101931 cli_runner.go:164] Run: docker container inspect multinode-764961-m02 --format={{.State.Status}}
	I1212 22:22:32.090013  101931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 22:22:32.090032  101931 kic_runner.go:114] Args: [docker exec --privileged multinode-764961-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 22:22:32.156343  101931 cli_runner.go:164] Run: docker container inspect multinode-764961-m02 --format={{.State.Status}}
	I1212 22:22:32.174869  101931 machine.go:88] provisioning docker machine ...
	I1212 22:22:32.174916  101931 ubuntu.go:169] provisioning hostname "multinode-764961-m02"
	I1212 22:22:32.174993  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:32.194809  101931 main.go:141] libmachine: Using SSH client type: native
	I1212 22:22:32.195305  101931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1212 22:22:32.195337  101931 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-764961-m02 && echo "multinode-764961-m02" | sudo tee /etc/hostname
	I1212 22:22:32.196104  101931 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44550->127.0.0.1:32852: read: connection reset by peer
	I1212 22:22:35.325208  101931 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-764961-m02
	
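
The "connection reset by peer" above is expected on first contact: Docker maps the container's 22/tcp to host port 32852 before sshd inside the fresh container is ready, and libmachine simply retries until the hostname command succeeds. A minimal stand-alone sketch of that wait (port taken from the log; retry counts are assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 32852 is the host port Docker mapped to the new container's 22/tcp.
	addr := "127.0.0.1:32852"
	for i := 0; i < 10; i++ {
		c, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			c.Close()
			fmt.Println("sshd is accepting connections")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for sshd")
}
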
	I1212 22:22:35.325283  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:35.341462  101931 main.go:141] libmachine: Using SSH client type: native
	I1212 22:22:35.341814  101931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1212 22:22:35.341833  101931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-764961-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-764961-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-764961-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:22:35.463496  101931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:22:35.463534  101931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:22:35.463573  101931 ubuntu.go:177] setting up certificates
	I1212 22:22:35.463586  101931 provision.go:83] configureAuth start
	I1212 22:22:35.463649  101931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961-m02
	I1212 22:22:35.480814  101931 provision.go:138] copyHostCerts
	I1212 22:22:35.480863  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:22:35.480900  101931 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem, removing ...
	I1212 22:22:35.480910  101931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:22:35.480984  101931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:22:35.481066  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:22:35.481090  101931 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem, removing ...
	I1212 22:22:35.481097  101931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:22:35.481133  101931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:22:35.481189  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:22:35.481212  101931 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem, removing ...
	I1212 22:22:35.481221  101931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:22:35.481258  101931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:22:35.481320  101931 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.multinode-764961-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-764961-m02]
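
The server cert generated above carries the node IP and hostnames as SANs so that both https://192.168.58.3 and https://localhost verify. A self-signed sketch with the same SAN list, using only the Go standard library (minikube instead signs with the cluster CA, the ca.pem/ca-key.pem pair named in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-764961-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		DNSNames:     []string{"localhost", "minikube", "multinode-764961-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
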
	I1212 22:22:35.679621  101931 provision.go:172] copyRemoteCerts
	I1212 22:22:35.679679  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:22:35.679709  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:35.696011  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa Username:docker}
	I1212 22:22:35.787470  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 22:22:35.787535  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:22:35.808503  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 22:22:35.808567  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 22:22:35.828815  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 22:22:35.828878  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 22:22:35.849087  101931 provision.go:86] duration metric: configureAuth took 385.489286ms
	I1212 22:22:35.849116  101931 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:22:35.849306  101931 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:22:35.849413  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:35.865002  101931 main.go:141] libmachine: Using SSH client type: native
	I1212 22:22:35.865308  101931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1212 22:22:35.865324  101931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:22:36.068998  101931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
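
The "%!s(MISSING)" fragments in the command above (and again in the df and find commands later in this log) are not output from the node: they are Go's fmt package flagging a format string that has more verbs than arguments, mangled before the command is even sent over SSH. A two-line reproduction:

package main

import "fmt"

func main() {
	// One %s verb has no matching argument, so fmt injects %!s(MISSING),
	// exactly the artifact visible in the provisioning commands above.
	fmt.Println(fmt.Sprintf("printf %s %s", "only-one-arg"))
	// Prints: printf only-one-arg %!s(MISSING)
}
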
	I1212 22:22:36.069029  101931 machine.go:91] provisioned docker machine in 3.894127784s
	I1212 22:22:36.069042  101931 client.go:171] LocalClient.Create took 10.180529311s
	I1212 22:22:36.069059  101931 start.go:167] duration metric: libmachine.API.Create for "multinode-764961" took 10.180580154s
	I1212 22:22:36.069076  101931 start.go:300] post-start starting for "multinode-764961-m02" (driver="docker")
	I1212 22:22:36.069088  101931 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:22:36.069138  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:22:36.069171  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:36.086076  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa Username:docker}
	I1212 22:22:36.176033  101931 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:22:36.178868  101931 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1212 22:22:36.178882  101931 command_runner.go:130] > NAME="Ubuntu"
	I1212 22:22:36.178888  101931 command_runner.go:130] > VERSION_ID="22.04"
	I1212 22:22:36.178893  101931 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1212 22:22:36.178897  101931 command_runner.go:130] > VERSION_CODENAME=jammy
	I1212 22:22:36.178901  101931 command_runner.go:130] > ID=ubuntu
	I1212 22:22:36.178905  101931 command_runner.go:130] > ID_LIKE=debian
	I1212 22:22:36.178909  101931 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1212 22:22:36.178914  101931 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1212 22:22:36.178920  101931 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1212 22:22:36.178926  101931 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1212 22:22:36.178930  101931 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1212 22:22:36.178980  101931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:22:36.179009  101931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:22:36.179021  101931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:22:36.179027  101931 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1212 22:22:36.179038  101931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:22:36.179085  101931 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:22:36.179149  101931 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> 163992.pem in /etc/ssl/certs
	I1212 22:22:36.179157  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> /etc/ssl/certs/163992.pem
	I1212 22:22:36.179231  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:22:36.186767  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:22:36.207983  101931 start.go:303] post-start completed in 138.893174ms
	I1212 22:22:36.208312  101931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961-m02
	I1212 22:22:36.225491  101931 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/config.json ...
	I1212 22:22:36.225729  101931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:22:36.225777  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:36.241461  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa Username:docker}
	I1212 22:22:36.327784  101931 command_runner.go:130] > 21%!(MISSING)
	I1212 22:22:36.328078  101931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:22:36.331762  101931 command_runner.go:130] > 232G
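
The two probes above read the same filesystem in two passes: percent of /var used (21%) and gigabytes still free (232G). With GNU coreutils df, both can be had in one call; a sketch (field layout assumes coreutils' --output support):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// GNU df can answer both probes at once; -BG scales avail to gigabytes.
	out, err := exec.Command("df", "-BG", "--output=pcent,avail", "/var").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	fields := strings.Fields(lines[len(lines)-1])
	fmt.Printf("used %s, %s free\n", fields[0], fields[1])
}
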
	I1212 22:22:36.331952  101931 start.go:128] duration metric: createHost completed in 10.445559473s
	I1212 22:22:36.331970  101931 start.go:83] releasing machines lock for "multinode-764961-m02", held for 10.445700048s
	I1212 22:22:36.332048  101931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961-m02
	I1212 22:22:36.349209  101931 out.go:177] * Found network options:
	I1212 22:22:36.350551  101931 out.go:177]   - NO_PROXY=192.168.58.2
	W1212 22:22:36.351868  101931 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 22:22:36.351905  101931 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 22:22:36.351976  101931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:22:36.352024  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:36.352037  101931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:22:36.352084  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:22:36.367576  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa Username:docker}
	I1212 22:22:36.369190  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa Username:docker}
	I1212 22:22:36.545258  101931 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 22:22:36.588357  101931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:22:36.592610  101931 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1212 22:22:36.592634  101931 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1212 22:22:36.592640  101931 command_runner.go:130] > Device: b0h/176d	Inode: 570028      Links: 1
	I1212 22:22:36.592646  101931 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:22:36.592652  101931 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1212 22:22:36.592660  101931 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1212 22:22:36.592665  101931 command_runner.go:130] > Change: 2023-12-12 22:02:56.298816728 +0000
	I1212 22:22:36.592671  101931 command_runner.go:130] >  Birth: 2023-12-12 22:02:56.298816728 +0000
	I1212 22:22:36.592870  101931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:22:36.610776  101931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:22:36.610852  101931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:22:36.636114  101931 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1212 22:22:36.636160  101931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
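
Bridge and podman CNI configs ship inside the kicbase image; left in place, CRI-O would pick whichever sorts first in /etc/cni/net.d, so minikube parks them under a .mk_disabled suffix and lets the kindnet config it deploys later take effect. A rough Go equivalent of the find/mv pipeline above (assumed behavior, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already parked on a previous run
			}
			fmt.Println("disabling", m)
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}
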
	I1212 22:22:36.636167  101931 start.go:475] detecting cgroup driver to use...
	I1212 22:22:36.636192  101931 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:22:36.636230  101931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:22:36.649150  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:22:36.658585  101931 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:22:36.658630  101931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:22:36.669782  101931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:22:36.682025  101931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 22:22:36.755975  101931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:22:36.832355  101931 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1212 22:22:36.832390  101931 docker.go:219] disabling docker service ...
	I1212 22:22:36.832438  101931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:22:36.848446  101931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:22:36.858409  101931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:22:36.868074  101931 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1212 22:22:36.931711  101931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:22:36.943652  101931 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1212 22:22:37.012364  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:22:37.022359  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:22:37.035467  101931 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 22:22:37.036349  101931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1212 22:22:37.036406  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:22:37.044581  101931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 22:22:37.044633  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:22:37.052825  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:22:37.060941  101931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
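
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following (reconstructed from the commands; the file itself is not shown in the log):

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
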
	I1212 22:22:37.069112  101931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 22:22:37.076712  101931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 22:22:37.084225  101931 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 22:22:37.084291  101931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 22:22:37.091670  101931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 22:22:37.162312  101931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 22:22:37.266838  101931 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 22:22:37.266904  101931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 22:22:37.270036  101931 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 22:22:37.270058  101931 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 22:22:37.270073  101931 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1212 22:22:37.270080  101931 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:22:37.270085  101931 command_runner.go:130] > Access: 2023-12-12 22:22:37.255134075 +0000
	I1212 22:22:37.270092  101931 command_runner.go:130] > Modify: 2023-12-12 22:22:37.255134075 +0000
	I1212 22:22:37.270103  101931 command_runner.go:130] > Change: 2023-12-12 22:22:37.255134075 +0000
	I1212 22:22:37.270113  101931 command_runner.go:130] >  Birth: -
	I1212 22:22:37.270153  101931 start.go:543] Will wait 60s for crictl version
	I1212 22:22:37.270187  101931 ssh_runner.go:195] Run: which crictl
	I1212 22:22:37.272963  101931 command_runner.go:130] > /usr/bin/crictl
	I1212 22:22:37.273059  101931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 22:22:37.303594  101931 command_runner.go:130] > Version:  0.1.0
	I1212 22:22:37.303618  101931 command_runner.go:130] > RuntimeName:  cri-o
	I1212 22:22:37.303628  101931 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1212 22:22:37.303634  101931 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 22:22:37.303648  101931 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1212 22:22:37.303703  101931 ssh_runner.go:195] Run: crio --version
	I1212 22:22:37.335411  101931 command_runner.go:130] > crio version 1.24.6
	I1212 22:22:37.335435  101931 command_runner.go:130] > Version:          1.24.6
	I1212 22:22:37.335455  101931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 22:22:37.335459  101931 command_runner.go:130] > GitTreeState:     clean
	I1212 22:22:37.335465  101931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 22:22:37.335469  101931 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 22:22:37.335475  101931 command_runner.go:130] > Compiler:         gc
	I1212 22:22:37.335482  101931 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:22:37.335492  101931 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:22:37.335506  101931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:22:37.335517  101931 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:22:37.335524  101931 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:22:37.335614  101931 ssh_runner.go:195] Run: crio --version
	I1212 22:22:37.368406  101931 command_runner.go:130] > crio version 1.24.6
	I1212 22:22:37.368433  101931 command_runner.go:130] > Version:          1.24.6
	I1212 22:22:37.368442  101931 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1212 22:22:37.368449  101931 command_runner.go:130] > GitTreeState:     clean
	I1212 22:22:37.368456  101931 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1212 22:22:37.368464  101931 command_runner.go:130] > GoVersion:        go1.18.2
	I1212 22:22:37.368470  101931 command_runner.go:130] > Compiler:         gc
	I1212 22:22:37.368489  101931 command_runner.go:130] > Platform:         linux/amd64
	I1212 22:22:37.368502  101931 command_runner.go:130] > Linkmode:         dynamic
	I1212 22:22:37.368515  101931 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1212 22:22:37.368526  101931 command_runner.go:130] > SeccompEnabled:   true
	I1212 22:22:37.368536  101931 command_runner.go:130] > AppArmorEnabled:  false
	I1212 22:22:37.371372  101931 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1212 22:22:37.372904  101931 out.go:177]   - env NO_PROXY=192.168.58.2
	I1212 22:22:37.374188  101931 cli_runner.go:164] Run: docker network inspect multinode-764961 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 22:22:37.390986  101931 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1212 22:22:37.394271  101931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:22:37.403732  101931 certs.go:56] Setting up /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961 for IP: 192.168.58.3
	I1212 22:22:37.403763  101931 certs.go:190] acquiring lock for shared ca certs: {Name:mkef1e7b14f91e4f04d1e9cbbafdc8c42ba43b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 22:22:37.403876  101931 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key
	I1212 22:22:37.403920  101931 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key
	I1212 22:22:37.403932  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 22:22:37.403945  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 22:22:37.403956  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 22:22:37.403968  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 22:22:37.404009  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem (1338 bytes)
	W1212 22:22:37.404043  101931 certs.go:433] ignoring /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399_empty.pem, impossibly tiny 0 bytes
	I1212 22:22:37.404053  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem (1679 bytes)
	I1212 22:22:37.404078  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem (1082 bytes)
	I1212 22:22:37.404101  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem (1123 bytes)
	I1212 22:22:37.404122  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem (1675 bytes)
	I1212 22:22:37.404181  101931 certs.go:437] found cert: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:22:37.404209  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem -> /usr/share/ca-certificates/16399.pem
	I1212 22:22:37.404222  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> /usr/share/ca-certificates/163992.pem
	I1212 22:22:37.404233  101931 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:22:37.404579  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 22:22:37.426241  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 22:22:37.447634  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 22:22:37.467777  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 22:22:37.490735  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/16399.pem --> /usr/share/ca-certificates/16399.pem (1338 bytes)
	I1212 22:22:37.511055  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /usr/share/ca-certificates/163992.pem (1708 bytes)
	I1212 22:22:37.530817  101931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 22:22:37.551038  101931 ssh_runner.go:195] Run: openssl version
	I1212 22:22:37.555398  101931 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1212 22:22:37.555560  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16399.pem && ln -fs /usr/share/ca-certificates/16399.pem /etc/ssl/certs/16399.pem"
	I1212 22:22:37.563323  101931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16399.pem
	I1212 22:22:37.566147  101931 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 22:08 /usr/share/ca-certificates/16399.pem
	I1212 22:22:37.566173  101931 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 22:08 /usr/share/ca-certificates/16399.pem
	I1212 22:22:37.566201  101931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16399.pem
	I1212 22:22:37.571780  101931 command_runner.go:130] > 51391683
	I1212 22:22:37.571965  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16399.pem /etc/ssl/certs/51391683.0"
	I1212 22:22:37.579632  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163992.pem && ln -fs /usr/share/ca-certificates/163992.pem /etc/ssl/certs/163992.pem"
	I1212 22:22:37.587287  101931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163992.pem
	I1212 22:22:37.590060  101931 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 22:08 /usr/share/ca-certificates/163992.pem
	I1212 22:22:37.590111  101931 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 22:08 /usr/share/ca-certificates/163992.pem
	I1212 22:22:37.590160  101931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163992.pem
	I1212 22:22:37.596649  101931 command_runner.go:130] > 3ec20f2e
	I1212 22:22:37.596725  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163992.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 22:22:37.605334  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 22:22:37.613679  101931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:22:37.616573  101931 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:22:37.616606  101931 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 22:03 /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:22:37.616645  101931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 22:22:37.622809  101931 command_runner.go:130] > b5213941
	I1212 22:22:37.622862  101931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
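
The hash-and-symlink sequence above is the stock OpenSSL CA-directory layout: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and OpenSSL resolves trust by looking for <hash>.0 in /etc/ssl/certs. A sketch that does the same for the minikube CA (paths and the b5213941 hash come from the log; creating the symlink needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}
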
	I1212 22:22:37.631580  101931 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 22:22:37.634330  101931 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:22:37.634391  101931 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 22:22:37.634467  101931 ssh_runner.go:195] Run: crio config
	I1212 22:22:37.667519  101931 command_runner.go:130] ! time="2023-12-12 22:22:37.667072446Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1212 22:22:37.667575  101931 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 22:22:37.671539  101931 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 22:22:37.671584  101931 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 22:22:37.671595  101931 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 22:22:37.671601  101931 command_runner.go:130] > #
	I1212 22:22:37.671610  101931 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 22:22:37.671619  101931 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 22:22:37.671626  101931 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 22:22:37.671638  101931 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 22:22:37.671645  101931 command_runner.go:130] > # reload'.
	I1212 22:22:37.671651  101931 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 22:22:37.671660  101931 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 22:22:37.671668  101931 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 22:22:37.671677  101931 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 22:22:37.671681  101931 command_runner.go:130] > [crio]
	I1212 22:22:37.671687  101931 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 22:22:37.671695  101931 command_runner.go:130] > # containers images, in this directory.
	I1212 22:22:37.671703  101931 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 22:22:37.671712  101931 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 22:22:37.671718  101931 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1212 22:22:37.671726  101931 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 22:22:37.671733  101931 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 22:22:37.671740  101931 command_runner.go:130] > # storage_driver = "vfs"
	I1212 22:22:37.671746  101931 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 22:22:37.671764  101931 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 22:22:37.671769  101931 command_runner.go:130] > # storage_option = [
	I1212 22:22:37.671772  101931 command_runner.go:130] > # ]
	I1212 22:22:37.671780  101931 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 22:22:37.671787  101931 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 22:22:37.671793  101931 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 22:22:37.671799  101931 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 22:22:37.671806  101931 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 22:22:37.671817  101931 command_runner.go:130] > # always happen on a node reboot
	I1212 22:22:37.671824  101931 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 22:22:37.671833  101931 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 22:22:37.671839  101931 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 22:22:37.671856  101931 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 22:22:37.671865  101931 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1212 22:22:37.671873  101931 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 22:22:37.671883  101931 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 22:22:37.671887  101931 command_runner.go:130] > # internal_wipe = true
	I1212 22:22:37.671895  101931 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 22:22:37.671902  101931 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 22:22:37.671910  101931 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 22:22:37.671915  101931 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 22:22:37.671923  101931 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 22:22:37.671928  101931 command_runner.go:130] > [crio.api]
	I1212 22:22:37.671933  101931 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 22:22:37.671940  101931 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 22:22:37.671946  101931 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 22:22:37.671953  101931 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 22:22:37.671960  101931 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 22:22:37.671968  101931 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 22:22:37.671972  101931 command_runner.go:130] > # stream_port = "0"
	I1212 22:22:37.671980  101931 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 22:22:37.671985  101931 command_runner.go:130] > # stream_enable_tls = false
	I1212 22:22:37.671991  101931 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 22:22:37.671997  101931 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 22:22:37.672004  101931 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 22:22:37.672013  101931 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 22:22:37.672017  101931 command_runner.go:130] > # minutes.
	I1212 22:22:37.672021  101931 command_runner.go:130] > # stream_tls_cert = ""
	I1212 22:22:37.672027  101931 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 22:22:37.672035  101931 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 22:22:37.672040  101931 command_runner.go:130] > # stream_tls_key = ""
	I1212 22:22:37.672046  101931 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 22:22:37.672054  101931 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 22:22:37.672060  101931 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 22:22:37.672072  101931 command_runner.go:130] > # stream_tls_ca = ""
	I1212 22:22:37.672082  101931 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:22:37.672087  101931 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 22:22:37.672096  101931 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1212 22:22:37.672104  101931 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 22:22:37.672131  101931 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 22:22:37.672143  101931 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 22:22:37.672148  101931 command_runner.go:130] > [crio.runtime]
	I1212 22:22:37.672154  101931 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 22:22:37.672162  101931 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 22:22:37.672166  101931 command_runner.go:130] > # "nofile=1024:2048"
	I1212 22:22:37.672174  101931 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 22:22:37.672178  101931 command_runner.go:130] > # default_ulimits = [
	I1212 22:22:37.672184  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672190  101931 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 22:22:37.672197  101931 command_runner.go:130] > # no_pivot = false
	I1212 22:22:37.672202  101931 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 22:22:37.672211  101931 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 22:22:37.672216  101931 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 22:22:37.672225  101931 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 22:22:37.672230  101931 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 22:22:37.672239  101931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:22:37.672243  101931 command_runner.go:130] > # conmon = ""
	I1212 22:22:37.672249  101931 command_runner.go:130] > # Cgroup setting for conmon
	I1212 22:22:37.672256  101931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 22:22:37.672262  101931 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 22:22:37.672269  101931 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 22:22:37.672274  101931 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 22:22:37.672286  101931 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 22:22:37.672296  101931 command_runner.go:130] > # conmon_env = [
	I1212 22:22:37.672303  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672331  101931 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 22:22:37.672339  101931 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 22:22:37.672345  101931 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 22:22:37.672352  101931 command_runner.go:130] > # default_env = [
	I1212 22:22:37.672357  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672365  101931 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 22:22:37.672370  101931 command_runner.go:130] > # selinux = false
	I1212 22:22:37.672376  101931 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 22:22:37.672383  101931 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 22:22:37.672391  101931 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 22:22:37.672397  101931 command_runner.go:130] > # seccomp_profile = ""
	I1212 22:22:37.672403  101931 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 22:22:37.672411  101931 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 22:22:37.672417  101931 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 22:22:37.672424  101931 command_runner.go:130] > # which might increase security.
	I1212 22:22:37.672429  101931 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1212 22:22:37.672437  101931 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 22:22:37.672447  101931 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 22:22:37.672455  101931 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 22:22:37.672464  101931 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 22:22:37.672469  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:22:37.672474  101931 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 22:22:37.672481  101931 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 22:22:37.672486  101931 command_runner.go:130] > # the cgroup blockio controller.
	I1212 22:22:37.672492  101931 command_runner.go:130] > # blockio_config_file = ""
	I1212 22:22:37.672498  101931 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 22:22:37.672505  101931 command_runner.go:130] > # irqbalance daemon.
	I1212 22:22:37.672510  101931 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 22:22:37.672521  101931 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 22:22:37.672526  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:22:37.672533  101931 command_runner.go:130] > # rdt_config_file = ""
	I1212 22:22:37.672538  101931 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 22:22:37.672545  101931 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 22:22:37.672552  101931 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 22:22:37.672559  101931 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 22:22:37.672565  101931 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 22:22:37.672574  101931 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 22:22:37.672578  101931 command_runner.go:130] > # will be added.
	I1212 22:22:37.672585  101931 command_runner.go:130] > # default_capabilities = [
	I1212 22:22:37.672589  101931 command_runner.go:130] > # 	"CHOWN",
	I1212 22:22:37.672598  101931 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 22:22:37.672602  101931 command_runner.go:130] > # 	"FSETID",
	I1212 22:22:37.672606  101931 command_runner.go:130] > # 	"FOWNER",
	I1212 22:22:37.672610  101931 command_runner.go:130] > # 	"SETGID",
	I1212 22:22:37.672616  101931 command_runner.go:130] > # 	"SETUID",
	I1212 22:22:37.672628  101931 command_runner.go:130] > # 	"SETPCAP",
	I1212 22:22:37.672636  101931 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 22:22:37.672640  101931 command_runner.go:130] > # 	"KILL",
	I1212 22:22:37.672643  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672651  101931 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 22:22:37.672660  101931 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 22:22:37.672665  101931 command_runner.go:130] > # add_inheritable_capabilities = true
	I1212 22:22:37.672671  101931 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 22:22:37.672679  101931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:22:37.672683  101931 command_runner.go:130] > # default_sysctls = [
	I1212 22:22:37.672688  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672693  101931 command_runner.go:130] > # List of devices on the host that a
	I1212 22:22:37.672702  101931 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 22:22:37.672706  101931 command_runner.go:130] > # allowed_devices = [
	I1212 22:22:37.672713  101931 command_runner.go:130] > # 	"/dev/fuse",
	I1212 22:22:37.672716  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672721  101931 command_runner.go:130] > # List of additional devices, specified as
	I1212 22:22:37.672743  101931 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 22:22:37.672752  101931 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 22:22:37.672758  101931 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 22:22:37.672764  101931 command_runner.go:130] > # additional_devices = [
	I1212 22:22:37.672768  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672773  101931 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 22:22:37.672779  101931 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 22:22:37.672784  101931 command_runner.go:130] > # 	"/etc/cdi",
	I1212 22:22:37.672788  101931 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 22:22:37.672794  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672800  101931 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 22:22:37.672806  101931 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 22:22:37.672812  101931 command_runner.go:130] > # Defaults to false.
	I1212 22:22:37.672818  101931 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 22:22:37.672828  101931 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 22:22:37.672836  101931 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 22:22:37.672840  101931 command_runner.go:130] > # hooks_dir = [
	I1212 22:22:37.672847  101931 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 22:22:37.672851  101931 command_runner.go:130] > # ]
	I1212 22:22:37.672857  101931 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 22:22:37.672865  101931 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 22:22:37.672871  101931 command_runner.go:130] > # its default mounts from the following two files:
	I1212 22:22:37.672876  101931 command_runner.go:130] > #
	I1212 22:22:37.672882  101931 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 22:22:37.672891  101931 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 22:22:37.672897  101931 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 22:22:37.672903  101931 command_runner.go:130] > #
	I1212 22:22:37.672909  101931 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 22:22:37.672918  101931 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 22:22:37.672924  101931 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 22:22:37.672932  101931 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 22:22:37.672935  101931 command_runner.go:130] > #
	I1212 22:22:37.672942  101931 command_runner.go:130] > # default_mounts_file = ""
	I1212 22:22:37.672948  101931 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 22:22:37.672955  101931 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 22:22:37.672962  101931 command_runner.go:130] > # pids_limit = 0
	I1212 22:22:37.672968  101931 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 22:22:37.672976  101931 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 22:22:37.672982  101931 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 22:22:37.672996  101931 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 22:22:37.673003  101931 command_runner.go:130] > # log_size_max = -1
	I1212 22:22:37.673009  101931 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 22:22:37.673016  101931 command_runner.go:130] > # log_to_journald = false
	I1212 22:22:37.673025  101931 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 22:22:37.673033  101931 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 22:22:37.673038  101931 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 22:22:37.673046  101931 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 22:22:37.673051  101931 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 22:22:37.673056  101931 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 22:22:37.673064  101931 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 22:22:37.673069  101931 command_runner.go:130] > # read_only = false
	I1212 22:22:37.673076  101931 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 22:22:37.673085  101931 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 22:22:37.673090  101931 command_runner.go:130] > # live configuration reload.
	I1212 22:22:37.673096  101931 command_runner.go:130] > # log_level = "info"
	I1212 22:22:37.673102  101931 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 22:22:37.673109  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:22:37.673114  101931 command_runner.go:130] > # log_filter = ""
	I1212 22:22:37.673122  101931 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 22:22:37.673128  101931 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 22:22:37.673135  101931 command_runner.go:130] > # separated by comma.
	I1212 22:22:37.673139  101931 command_runner.go:130] > # uid_mappings = ""
	I1212 22:22:37.673147  101931 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 22:22:37.673153  101931 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 22:22:37.673162  101931 command_runner.go:130] > # separated by comma.
	I1212 22:22:37.673167  101931 command_runner.go:130] > # gid_mappings = ""
	I1212 22:22:37.673175  101931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 22:22:37.673181  101931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:22:37.673190  101931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:22:37.673194  101931 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 22:22:37.673202  101931 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 22:22:37.673210  101931 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 22:22:37.673218  101931 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 22:22:37.673224  101931 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 22:22:37.673233  101931 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 22:22:37.673241  101931 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 22:22:37.673247  101931 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 22:22:37.673253  101931 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 22:22:37.673259  101931 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 22:22:37.673269  101931 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 22:22:37.673276  101931 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 22:22:37.673281  101931 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 22:22:37.673288  101931 command_runner.go:130] > # drop_infra_ctr = true
	I1212 22:22:37.673294  101931 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 22:22:37.673302  101931 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 22:22:37.673314  101931 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 22:22:37.673321  101931 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 22:22:37.673328  101931 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 22:22:37.673336  101931 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 22:22:37.673340  101931 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 22:22:37.673347  101931 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 22:22:37.673353  101931 command_runner.go:130] > # pinns_path = ""
	I1212 22:22:37.673359  101931 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 22:22:37.673368  101931 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1212 22:22:37.673374  101931 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1212 22:22:37.673380  101931 command_runner.go:130] > # default_runtime = "runc"
	I1212 22:22:37.673386  101931 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 22:22:37.673396  101931 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 22:22:37.673407  101931 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 22:22:37.673415  101931 command_runner.go:130] > # creation as a file is not desired either.
	I1212 22:22:37.673423  101931 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 22:22:37.673431  101931 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 22:22:37.673435  101931 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 22:22:37.673439  101931 command_runner.go:130] > # ]
	I1212 22:22:37.673447  101931 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 22:22:37.673455  101931 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 22:22:37.673463  101931 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1212 22:22:37.673472  101931 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1212 22:22:37.673476  101931 command_runner.go:130] > #
	I1212 22:22:37.673482  101931 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1212 22:22:37.673487  101931 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1212 22:22:37.673494  101931 command_runner.go:130] > #  runtime_type = "oci"
	I1212 22:22:37.673499  101931 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1212 22:22:37.673506  101931 command_runner.go:130] > #  privileged_without_host_devices = false
	I1212 22:22:37.673511  101931 command_runner.go:130] > #  allowed_annotations = []
	I1212 22:22:37.673517  101931 command_runner.go:130] > # Where:
	I1212 22:22:37.673522  101931 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1212 22:22:37.673528  101931 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1212 22:22:37.673537  101931 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 22:22:37.673543  101931 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 22:22:37.673550  101931 command_runner.go:130] > #   in $PATH.
	I1212 22:22:37.673556  101931 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1212 22:22:37.673564  101931 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 22:22:37.673571  101931 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1212 22:22:37.673577  101931 command_runner.go:130] > #   state.
	I1212 22:22:37.673583  101931 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 22:22:37.673592  101931 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 22:22:37.673600  101931 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 22:22:37.673606  101931 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 22:22:37.673615  101931 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 22:22:37.673621  101931 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 22:22:37.673626  101931 command_runner.go:130] > #   The currently recognized values are:
	I1212 22:22:37.673635  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 22:22:37.673642  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 22:22:37.673651  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 22:22:37.673658  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 22:22:37.673667  101931 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 22:22:37.673676  101931 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 22:22:37.673683  101931 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 22:22:37.673691  101931 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1212 22:22:37.673699  101931 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 22:22:37.673704  101931 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 22:22:37.673712  101931 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1212 22:22:37.673716  101931 command_runner.go:130] > runtime_type = "oci"
	I1212 22:22:37.673720  101931 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 22:22:37.673727  101931 command_runner.go:130] > runtime_config_path = ""
	I1212 22:22:37.673731  101931 command_runner.go:130] > monitor_path = ""
	I1212 22:22:37.673735  101931 command_runner.go:130] > monitor_cgroup = ""
	I1212 22:22:37.673742  101931 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 22:22:37.673767  101931 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1212 22:22:37.673775  101931 command_runner.go:130] > # running containers
	I1212 22:22:37.673779  101931 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1212 22:22:37.673788  101931 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1212 22:22:37.673796  101931 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1212 22:22:37.673802  101931 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1212 22:22:37.673808  101931 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1212 22:22:37.673813  101931 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1212 22:22:37.673820  101931 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1212 22:22:37.673825  101931 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1212 22:22:37.673832  101931 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1212 22:22:37.673837  101931 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1212 22:22:37.673846  101931 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 22:22:37.673851  101931 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 22:22:37.673860  101931 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 22:22:37.673867  101931 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 22:22:37.673877  101931 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 22:22:37.673883  101931 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 22:22:37.673894  101931 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 22:22:37.673905  101931 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 22:22:37.673911  101931 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 22:22:37.673921  101931 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 22:22:37.673925  101931 command_runner.go:130] > # Example:
	I1212 22:22:37.673930  101931 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 22:22:37.673937  101931 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 22:22:37.673942  101931 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 22:22:37.673950  101931 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 22:22:37.673954  101931 command_runner.go:130] > # cpuset = "0-1"
	I1212 22:22:37.673961  101931 command_runner.go:130] > # cpushares = 0
	I1212 22:22:37.673964  101931 command_runner.go:130] > # Where:
	I1212 22:22:37.673972  101931 command_runner.go:130] > # The workload name is workload-type.
	I1212 22:22:37.673979  101931 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 22:22:37.673987  101931 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 22:22:37.673992  101931 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 22:22:37.674002  101931 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 22:22:37.674008  101931 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 22:22:37.674014  101931 command_runner.go:130] > # 
	I1212 22:22:37.674020  101931 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 22:22:37.674026  101931 command_runner.go:130] > #
	I1212 22:22:37.674032  101931 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 22:22:37.674043  101931 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 22:22:37.674050  101931 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 22:22:37.674059  101931 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 22:22:37.674065  101931 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 22:22:37.674072  101931 command_runner.go:130] > [crio.image]
	I1212 22:22:37.674079  101931 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 22:22:37.674086  101931 command_runner.go:130] > # default_transport = "docker://"
	I1212 22:22:37.674092  101931 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 22:22:37.674101  101931 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:22:37.674105  101931 command_runner.go:130] > # global_auth_file = ""
	I1212 22:22:37.674110  101931 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 22:22:37.674116  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:22:37.674121  101931 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1212 22:22:37.674128  101931 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 22:22:37.674136  101931 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 22:22:37.674142  101931 command_runner.go:130] > # This option supports live configuration reload.
	I1212 22:22:37.674150  101931 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 22:22:37.674156  101931 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 22:22:37.674164  101931 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 22:22:37.674171  101931 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 22:22:37.674179  101931 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 22:22:37.674184  101931 command_runner.go:130] > # pause_command = "/pause"
	I1212 22:22:37.674192  101931 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 22:22:37.674199  101931 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 22:22:37.674211  101931 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 22:22:37.674220  101931 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 22:22:37.674225  101931 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 22:22:37.674232  101931 command_runner.go:130] > # signature_policy = ""
	I1212 22:22:37.674242  101931 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 22:22:37.674251  101931 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 22:22:37.674255  101931 command_runner.go:130] > # changing them here.
	I1212 22:22:37.674262  101931 command_runner.go:130] > # insecure_registries = [
	I1212 22:22:37.674266  101931 command_runner.go:130] > # ]
	I1212 22:22:37.674275  101931 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 22:22:37.674280  101931 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 22:22:37.674287  101931 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 22:22:37.674292  101931 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 22:22:37.674297  101931 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 22:22:37.674304  101931 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 22:22:37.674311  101931 command_runner.go:130] > # CNI plugins.
	I1212 22:22:37.674318  101931 command_runner.go:130] > [crio.network]
	I1212 22:22:37.674324  101931 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 22:22:37.674333  101931 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 22:22:37.674337  101931 command_runner.go:130] > # cni_default_network = ""
	I1212 22:22:37.674345  101931 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 22:22:37.674350  101931 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 22:22:37.674358  101931 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 22:22:37.674363  101931 command_runner.go:130] > # plugin_dirs = [
	I1212 22:22:37.674369  101931 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 22:22:37.674373  101931 command_runner.go:130] > # ]
	I1212 22:22:37.674381  101931 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 22:22:37.674388  101931 command_runner.go:130] > [crio.metrics]
	I1212 22:22:37.674399  101931 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 22:22:37.674407  101931 command_runner.go:130] > # enable_metrics = false
	I1212 22:22:37.674412  101931 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 22:22:37.674419  101931 command_runner.go:130] > # By default, all metrics are enabled.
	I1212 22:22:37.674426  101931 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 22:22:37.674435  101931 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 22:22:37.674441  101931 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 22:22:37.674448  101931 command_runner.go:130] > # metrics_collectors = [
	I1212 22:22:37.674452  101931 command_runner.go:130] > # 	"operations",
	I1212 22:22:37.674460  101931 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 22:22:37.674465  101931 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 22:22:37.674471  101931 command_runner.go:130] > # 	"operations_errors",
	I1212 22:22:37.674476  101931 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 22:22:37.674480  101931 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 22:22:37.674487  101931 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 22:22:37.674492  101931 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 22:22:37.674499  101931 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 22:22:37.674503  101931 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 22:22:37.674508  101931 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 22:22:37.674512  101931 command_runner.go:130] > # 	"containers_oom_total",
	I1212 22:22:37.674519  101931 command_runner.go:130] > # 	"containers_oom",
	I1212 22:22:37.674523  101931 command_runner.go:130] > # 	"processes_defunct",
	I1212 22:22:37.674527  101931 command_runner.go:130] > # 	"operations_total",
	I1212 22:22:37.674534  101931 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 22:22:37.674539  101931 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 22:22:37.674545  101931 command_runner.go:130] > # 	"operations_errors_total",
	I1212 22:22:37.674552  101931 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 22:22:37.674557  101931 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 22:22:37.674564  101931 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 22:22:37.674569  101931 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 22:22:37.674578  101931 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 22:22:37.674582  101931 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 22:22:37.674588  101931 command_runner.go:130] > # ]
	I1212 22:22:37.674593  101931 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 22:22:37.674600  101931 command_runner.go:130] > # metrics_port = 9090
	I1212 22:22:37.674606  101931 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 22:22:37.674619  101931 command_runner.go:130] > # metrics_socket = ""
	I1212 22:22:37.674624  101931 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 22:22:37.674632  101931 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 22:22:37.674639  101931 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 22:22:37.674646  101931 command_runner.go:130] > # certificate on any modification event.
	I1212 22:22:37.674650  101931 command_runner.go:130] > # metrics_cert = ""
	I1212 22:22:37.674656  101931 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 22:22:37.674662  101931 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 22:22:37.674666  101931 command_runner.go:130] > # metrics_key = ""
	I1212 22:22:37.674679  101931 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 22:22:37.674686  101931 command_runner.go:130] > [crio.tracing]
	I1212 22:22:37.674692  101931 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 22:22:37.674699  101931 command_runner.go:130] > # enable_tracing = false
	I1212 22:22:37.674704  101931 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1212 22:22:37.674711  101931 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 22:22:37.674716  101931 command_runner.go:130] > # Number of samples to collect per million spans.
	I1212 22:22:37.674724  101931 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 22:22:37.674730  101931 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 22:22:37.674736  101931 command_runner.go:130] > [crio.stats]
	I1212 22:22:37.674742  101931 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 22:22:37.674750  101931 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 22:22:37.674755  101931 command_runner.go:130] > # stats_collection_period = 0
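The dump above is plain TOML, so the effective overrides minikube applied (conmon_cgroup, cgroup_manager, pause_image) can be read back programmatically. A minimal Go sketch, assuming the file sits at /etc/crio/crio.conf and using github.com/BurntSushi/toml as an illustrative decoder (not necessarily what CRI-O itself uses):

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// crioConf models only the handful of keys this report overrides;
// the full file has many more, all optional.
type crioConf struct {
	Crio struct {
		Runtime struct {
			ConmonCgroup  string `toml:"conmon_cgroup"`
			CgroupManager string `toml:"cgroup_manager"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	// Path is an assumption; CRI-O also reads drop-ins from /etc/crio/crio.conf.d.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		panic(err)
	}
	fmt.Println("cgroup_manager:", c.Crio.Runtime.CgroupManager) // "cgroupfs" in the dump above
	fmt.Println("conmon_cgroup:", c.Crio.Runtime.ConmonCgroup)   // "pod"
	fmt.Println("pause_image:", c.Crio.Image.PauseImage)         // "registry.k8s.io/pause:3.9"
}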
	I1212 22:22:37.674816  101931 cni.go:84] Creating CNI manager for ""
	I1212 22:22:37.674827  101931 cni.go:136] 2 nodes found, recommending kindnet
	I1212 22:22:37.674836  101931 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 22:22:37.674857  101931 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-764961 NodeName:multinode-764961-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 22:22:37.674971  101931 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-764961-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 22:22:37.675023  101931 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-764961-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
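	Both the InitConfiguration YAML and the kubelet unit above are rendered from Go templates filled with the kubeadm options logged at kubeadm.go:176. A minimal sketch of that rendering step, using a hypothetical trimmed-down options struct and template rather than minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a hypothetical stand-in for the options struct logged
// at kubeadm.go:176; only the fields this sketch uses.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

// initCfg mirrors the nodeRegistration portion of the InitConfiguration above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the kubeadm options in the log entry above.
	err := t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.58.3",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "multinode-764961-m02",
		NodeIP:           "192.168.58.3",
	})
	if err != nil {
		panic(err)
	}
}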
	I1212 22:22:37.675069  101931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 22:22:37.682269  101931 command_runner.go:130] > kubeadm
	I1212 22:22:37.682289  101931 command_runner.go:130] > kubectl
	I1212 22:22:37.682295  101931 command_runner.go:130] > kubelet
	I1212 22:22:37.683008  101931 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 22:22:37.683058  101931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 22:22:37.690815  101931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1212 22:22:37.706362  101931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 22:22:37.722110  101931 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1212 22:22:37.725295  101931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 22:22:37.735406  101931 host.go:66] Checking if "multinode-764961" exists ...
	I1212 22:22:37.735688  101931 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:22:37.735681  101931 start.go:304] JoinCluster: &{Name:multinode-764961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-764961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:22:37.735792  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 22:22:37.735845  101931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:22:37.752395  101931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:22:37.891643  101931 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7waew9.ng4fkslvn9i5naw3 --discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f 
	I1212 22:22:37.891707  101931 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:22:37.891734  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7waew9.ng4fkslvn9i5naw3 --discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-764961-m02"
	I1212 22:22:37.927199  101931 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 22:22:37.954976  101931 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1212 22:22:37.954997  101931 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-gcp
	I1212 22:22:37.955002  101931 command_runner.go:130] > OS: Linux
	I1212 22:22:37.955008  101931 command_runner.go:130] > CGROUPS_CPU: enabled
	I1212 22:22:37.955013  101931 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1212 22:22:37.955023  101931 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1212 22:22:37.955028  101931 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1212 22:22:37.955033  101931 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1212 22:22:37.955038  101931 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1212 22:22:37.955048  101931 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1212 22:22:37.955056  101931 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1212 22:22:37.955063  101931 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1212 22:22:38.030817  101931 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1212 22:22:38.030852  101931 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1212 22:22:38.054859  101931 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 22:22:38.054892  101931 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 22:22:38.054935  101931 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 22:22:38.133849  101931 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1212 22:22:40.148791  101931 command_runner.go:130] > This node has joined the cluster:
	I1212 22:22:40.148820  101931 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1212 22:22:40.148830  101931 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1212 22:22:40.148845  101931 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1212 22:22:40.151188  101931 command_runner.go:130] ! W1212 22:22:37.926828    1109 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1212 22:22:40.151222  101931 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-gcp\n", err: exit status 1
	I1212 22:22:40.151239  101931 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 22:22:40.151265  101931 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7waew9.ng4fkslvn9i5naw3 --discovery-token-ca-cert-hash sha256:aa3ecec07b62e5dcfcb0a79f1c47a3b26aad78a4a5259d0e8aeb1f187cfb752f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-764961-m02": (2.259515171s)
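	The join sequence above is two commands: "kubeadm token create --print-join-command --ttl=0" on the control plane, then the printed command plus minikube's extra flags on the new node. A minimal sketch of the same flow in Go with os/exec, assuming kubeadm is on PATH and both steps run locally rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (control plane): mint a non-expiring token and print the join
	// command, as in the log's "kubeadm token create" invocation.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))
	fmt.Println("join command:", join)

	// Step 2 (worker): run the printed command with the extra flags minikube
	// appends. Fields()[1:] drops the leading "kubeadm".
	args := append(strings.Fields(join)[1:],
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/crio/crio.sock",
		"--node-name=multinode-764961-m02")
	joinOut, err := exec.Command("kubeadm", args...).CombinedOutput()
	fmt.Print(string(joinOut))
	if err != nil {
		panic(err)
	}
}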
	I1212 22:22:40.151289  101931 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 22:22:40.316439  101931 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1212 22:22:40.316557  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f minikube.k8s.io/name=multinode-764961 minikube.k8s.io/updated_at=2023_12_12T22_22_40_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 22:22:40.383862  101931 command_runner.go:130] > node/multinode-764961-m02 labeled
	I1212 22:22:40.386216  101931 start.go:306] JoinCluster complete in 2.650532648s
	I1212 22:22:40.386243  101931 cni.go:84] Creating CNI manager for ""
	I1212 22:22:40.386249  101931 cni.go:136] 2 nodes found, recommending kindnet
	I1212 22:22:40.386293  101931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 22:22:40.389732  101931 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 22:22:40.389754  101931 command_runner.go:130] >   Size: 4085020   	Blocks: 7984       IO Block: 4096   regular file
	I1212 22:22:40.389761  101931 command_runner.go:130] > Device: 37h/55d	Inode: 573802      Links: 1
	I1212 22:22:40.389768  101931 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 22:22:40.389776  101931 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I1212 22:22:40.389786  101931 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I1212 22:22:40.389791  101931 command_runner.go:130] > Change: 2023-12-12 22:02:56.690856677 +0000
	I1212 22:22:40.389796  101931 command_runner.go:130] >  Birth: 2023-12-12 22:02:56.666854231 +0000
	I1212 22:22:40.389839  101931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 22:22:40.389851  101931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 22:22:40.405290  101931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 22:22:40.615896  101931 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:22:40.618227  101931 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 22:22:40.620531  101931 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 22:22:40.630576  101931 command_runner.go:130] > daemonset.apps/kindnet configured
	I1212 22:22:40.634965  101931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:22:40.635179  101931 kapi.go:59] client config for multinode-764961: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:22:40.635512  101931 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 22:22:40.635526  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:40.635534  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:40.635540  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:40.637787  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:40.637805  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:40.637812  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:40.637818  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:40.637824  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:40.637829  101931 round_trippers.go:580]     Content-Length: 291
	I1212 22:22:40.637834  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:40 GMT
	I1212 22:22:40.637849  101931 round_trippers.go:580]     Audit-Id: 58957ea3-5dcd-4e7d-b088-e6cb9c1b1876
	I1212 22:22:40.637856  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:40.637877  101931 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f85012d3-692f-41ec-aa39-a084333b6df8","resourceVersion":"407","creationTimestamp":"2023-12-12T22:21:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 22:22:40.637963  101931 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-764961" context rescaled to 1 replicas
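(The GET on .../deployments/coredns/scale above is the read half of this rescale: kapi fetches the Scale subresource and, since spec.replicas already matches the target, can report "rescaled to 1 replicas" without a write. A sketch of the same operation with client-go, assuming a kubeconfig path rather than minikube's internal kapi helper, might be:)

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	// Read the Scale subresource -- the same endpoint the GET above hits.
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Only write back if the replica count actually needs to change.
    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1
    		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("coredns at 1 replica")
    }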
	I1212 22:22:40.637990  101931 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 22:22:40.640764  101931 out.go:177] * Verifying Kubernetes components...
	I1212 22:22:40.642209  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:22:40.652881  101931 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:22:40.653142  101931 kapi.go:59] client config for multinode-764961: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/profiles/multinode-764961/client.key", CAFile:"/home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c267e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 22:22:40.653430  101931 node_ready.go:35] waiting up to 6m0s for node "multinode-764961-m02" to be "Ready" ...
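(Each ~500 ms block of GET /api/v1/nodes/multinode-764961-m02 below is one iteration of this wait: fetch the Node, scan status.conditions for Ready, and log "Ready":"False" until the kubelet posts a True condition. A hedged client-go sketch of such a loop, under the same 6m budget, could look like:)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 500ms, as the requests below do, for up to 6 minutes.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-764961-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node Ready")
    }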
	I1212 22:22:40.653506  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:40.653517  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:40.653528  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:40.653535  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:40.656127  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:40.656150  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:40.656159  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:40.656166  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:40.656174  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:40.656182  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:40 GMT
	I1212 22:22:40.656190  101931 round_trippers.go:580]     Audit-Id: 583bc327-a9f1-48fb-9e23-4a102ef6be5e
	I1212 22:22:40.656198  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:40.656382  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:40.656718  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:40.656733  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:40.656743  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:40.656751  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:40.658618  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:22:40.658636  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:40.658646  101931 round_trippers.go:580]     Audit-Id: 5d649c68-aa54-434d-869a-958ca346b77b
	I1212 22:22:40.658656  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:40.658665  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:40.658676  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:40.658684  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:40.658692  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:40 GMT
	I1212 22:22:40.658798  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:41.159591  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:41.159613  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:41.159620  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:41.159626  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:41.161700  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:41.161722  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:41.161731  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:41 GMT
	I1212 22:22:41.161739  101931 round_trippers.go:580]     Audit-Id: de414935-5272-4eb4-9eca-885d74892c2f
	I1212 22:22:41.161747  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:41.161754  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:41.161782  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:41.161794  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:41.161903  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:41.659688  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:41.659711  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:41.659722  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:41.659728  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:41.661842  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:41.661863  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:41.661873  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:41.661880  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:41.661886  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:41.661894  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:41.661902  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:41 GMT
	I1212 22:22:41.661910  101931 round_trippers.go:580]     Audit-Id: d4184d22-71e2-48eb-a9b6-9066450a81f6
	I1212 22:22:41.662030  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:42.159633  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:42.159655  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:42.159662  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:42.159668  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:42.161812  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:42.161832  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:42.161841  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:42.161849  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:42.161858  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:42.161867  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:42.161880  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:42 GMT
	I1212 22:22:42.161891  101931 round_trippers.go:580]     Audit-Id: dcbde99f-0de7-4781-81b5-00c541a15bd6
	I1212 22:22:42.162089  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:42.659657  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:42.659688  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:42.659700  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:42.659712  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:42.662013  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:42.662032  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:42.662038  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:42.662044  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:42 GMT
	I1212 22:22:42.662049  101931 round_trippers.go:580]     Audit-Id: b31b5c9f-ed33-42e0-b691-0d833c369d38
	I1212 22:22:42.662057  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:42.662065  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:42.662074  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:42.662246  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:42.662533  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:43.159923  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:43.159948  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:43.159961  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:43.159974  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:43.162136  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:43.162156  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:43.162165  101931 round_trippers.go:580]     Audit-Id: 2b6645e5-e391-4780-b365-640c772cfb26
	I1212 22:22:43.162176  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:43.162185  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:43.162193  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:43.162202  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:43.162214  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:43 GMT
	I1212 22:22:43.162326  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:43.659946  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:43.659974  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:43.659986  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:43.659994  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:43.662081  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:43.662106  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:43.662116  101931 round_trippers.go:580]     Audit-Id: 5e7870b2-c309-46ce-9a8b-7edf5ca399d6
	I1212 22:22:43.662124  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:43.662132  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:43.662142  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:43.662154  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:43.662163  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:43 GMT
	I1212 22:22:43.662284  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:44.159895  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:44.159916  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:44.159924  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:44.159931  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:44.162027  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:44.162048  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:44.162056  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:44.162064  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:44.162072  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:44.162081  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:44.162093  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:44 GMT
	I1212 22:22:44.162102  101931 round_trippers.go:580]     Audit-Id: d2187efa-7e55-4892-b0c9-a2777e0d1619
	I1212 22:22:44.162237  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:44.659847  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:44.659865  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:44.659873  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:44.659879  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:44.662024  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:44.662042  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:44.662049  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:44.662054  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:44.662060  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:44 GMT
	I1212 22:22:44.662065  101931 round_trippers.go:580]     Audit-Id: a91fe19f-d1e0-4742-aa75-895e13148dbe
	I1212 22:22:44.662070  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:44.662075  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:44.662234  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:44.662556  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:45.159898  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:45.159922  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:45.159931  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:45.159939  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:45.162177  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:45.162201  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:45.162208  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:45 GMT
	I1212 22:22:45.162213  101931 round_trippers.go:580]     Audit-Id: 7d6a57e5-0802-4b05-b873-61d833e736e8
	I1212 22:22:45.162218  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:45.162223  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:45.162228  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:45.162234  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:45.162450  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:45.660023  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:45.660060  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:45.660070  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:45.660079  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:45.662277  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:45.662297  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:45.662304  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:45.662309  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:45 GMT
	I1212 22:22:45.662314  101931 round_trippers.go:580]     Audit-Id: 21f0aaa1-3df9-4ca5-acfe-8bf9a83b969b
	I1212 22:22:45.662319  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:45.662324  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:45.662329  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:45.662421  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:46.159678  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:46.159700  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:46.159708  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:46.159715  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:46.161972  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:46.161999  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:46.162009  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:46 GMT
	I1212 22:22:46.162021  101931 round_trippers.go:580]     Audit-Id: 8bd8d530-e063-4107-8009-a22d4753b2a8
	I1212 22:22:46.162026  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:46.162032  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:46.162039  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:46.162050  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:46.162189  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:46.659264  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:46.659288  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:46.659296  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:46.659303  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:46.661865  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:46.661885  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:46.661891  101931 round_trippers.go:580]     Audit-Id: ca5d0d6b-088a-45ca-9833-5b55ed4df74a
	I1212 22:22:46.661897  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:46.661904  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:46.661913  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:46.661925  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:46.661936  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:46 GMT
	I1212 22:22:46.662112  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:47.159681  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:47.159702  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:47.159713  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:47.159721  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:47.161776  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:47.161796  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:47.161803  101931 round_trippers.go:580]     Audit-Id: 1ef9b350-71f6-4f19-947a-fe46bb0beaea
	I1212 22:22:47.161809  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:47.161814  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:47.161819  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:47.161829  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:47.161837  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:47 GMT
	I1212 22:22:47.162008  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:47.162405  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:47.659661  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:47.659680  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:47.659688  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:47.659694  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:47.661789  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:47.661809  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:47.661816  101931 round_trippers.go:580]     Audit-Id: 91c281a8-b68c-4435-9277-5e6d0764045a
	I1212 22:22:47.661821  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:47.661826  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:47.661831  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:47.661837  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:47.661842  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:47 GMT
	I1212 22:22:47.661980  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:48.159560  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:48.159586  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:48.159595  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:48.159601  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:48.161735  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:48.161753  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:48.161760  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:48.161765  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:48.161771  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:48.161776  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:48.161781  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:48 GMT
	I1212 22:22:48.161787  101931 round_trippers.go:580]     Audit-Id: b9e8d126-6865-477a-b3ae-28ac3f7c5c5b
	I1212 22:22:48.161884  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:48.659452  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:48.659473  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:48.659480  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:48.659486  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:48.661633  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:48.661650  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:48.661659  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:48.661665  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:48.661670  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:48.661675  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:48 GMT
	I1212 22:22:48.661680  101931 round_trippers.go:580]     Audit-Id: 9af7a3e0-ed2a-467f-9c51-78673e178023
	I1212 22:22:48.661685  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:48.661771  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:49.159277  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:49.159299  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:49.159307  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:49.159314  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:49.161494  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:49.161513  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:49.161520  101931 round_trippers.go:580]     Audit-Id: a4196834-f6b9-4b05-8493-fff26eb1defc
	I1212 22:22:49.161526  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:49.161532  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:49.161538  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:49.161544  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:49.161552  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:49 GMT
	I1212 22:22:49.161690  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:49.659259  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:49.659281  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:49.659290  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:49.659296  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:49.661461  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:49.661496  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:49.661506  101931 round_trippers.go:580]     Audit-Id: 74523a42-5fa4-4d8a-ba87-e29c30456baa
	I1212 22:22:49.661515  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:49.661523  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:49.661534  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:49.661544  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:49.661553  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:49 GMT
	I1212 22:22:49.661650  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"447","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5762 chars]
	I1212 22:22:49.661926  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:50.159227  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:50.159247  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:50.159254  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:50.159260  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:50.161437  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:50.161459  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:50.161468  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:50.161475  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:50 GMT
	I1212 22:22:50.161483  101931 round_trippers.go:580]     Audit-Id: 5db2e387-b578-43cf-85f9-181bc91fd903
	I1212 22:22:50.161491  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:50.161502  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:50.161510  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:50.161649  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:50.659248  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:50.659271  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:50.659279  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:50.659285  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:50.661642  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:50.661662  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:50.661671  101931 round_trippers.go:580]     Audit-Id: c4a2bf58-9e34-4641-9d70-53fef45b22f6
	I1212 22:22:50.661678  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:50.661685  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:50.661693  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:50.661703  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:50.661715  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:50 GMT
	I1212 22:22:50.661845  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:51.159896  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:51.159921  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:51.159929  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:51.159936  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:51.162442  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:51.162461  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:51.162468  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:51.162473  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:51.162478  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:51.162483  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:51.162489  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:51 GMT
	I1212 22:22:51.162494  101931 round_trippers.go:580]     Audit-Id: 4c86f311-7a72-4392-a2c2-cb0668ad7e91
	I1212 22:22:51.162630  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:51.659395  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:51.659415  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:51.659423  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:51.659429  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:51.661555  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:51.661577  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:51.661586  101931 round_trippers.go:580]     Audit-Id: 971043c1-28c4-4836-8aa9-1e8905d974cc
	I1212 22:22:51.661593  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:51.661598  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:51.661608  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:51.661617  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:51.661630  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:51 GMT
	I1212 22:22:51.661751  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:51.662057  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:52.159298  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:52.159318  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:52.159326  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:52.159333  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:52.161574  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:52.161593  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:52.161602  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:52.161611  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:52.161619  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:52 GMT
	I1212 22:22:52.161627  101931 round_trippers.go:580]     Audit-Id: 8369488e-8cd0-4630-81b0-d3a431c0910f
	I1212 22:22:52.161640  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:52.161648  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:52.161853  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:52.659362  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:52.659385  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:52.659395  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:52.659405  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:52.661572  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:52.661594  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:52.661604  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:52 GMT
	I1212 22:22:52.661612  101931 round_trippers.go:580]     Audit-Id: 1715a48c-88fb-4802-b1e0-5a8387de3e17
	I1212 22:22:52.661619  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:52.661626  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:52.661634  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:52.661645  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:52.661765  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:53.159275  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:53.159299  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:53.159307  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:53.159313  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:53.161596  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:53.161622  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:53.161633  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:53.161642  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:53 GMT
	I1212 22:22:53.161649  101931 round_trippers.go:580]     Audit-Id: 0aaa887e-faab-413f-9013-1fa6ac4f5da1
	I1212 22:22:53.161654  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:53.161661  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:53.161666  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:53.161790  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:53.659299  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:53.659320  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:53.659328  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:53.659333  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:53.663220  101931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:22:53.663258  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:53.663271  101931 round_trippers.go:580]     Audit-Id: 2943f924-286d-4198-9af6-1ce912b98fd4
	I1212 22:22:53.663281  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:53.663295  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:53.663326  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:53.663340  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:53.663352  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:53 GMT
	I1212 22:22:53.663521  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:53.663927  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:54.160038  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:54.160063  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:54.160075  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:54.160086  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:54.162281  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:54.162299  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:54.162306  101931 round_trippers.go:580]     Audit-Id: 16756835-aca6-49c3-ae62-b0a2c9cc27a2
	I1212 22:22:54.162312  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:54.162318  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:54.162323  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:54.162333  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:54.162338  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:54 GMT
	I1212 22:22:54.162514  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:54.660154  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:54.660177  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:54.660185  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:54.660191  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:54.662344  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:54.662371  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:54.662380  101931 round_trippers.go:580]     Audit-Id: d5dbca2f-246f-4b6d-a27c-f6e47c8f5e25
	I1212 22:22:54.662386  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:54.662391  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:54.662396  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:54.662401  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:54.662407  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:54 GMT
	I1212 22:22:54.662537  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:55.160210  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:55.160233  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:55.160241  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:55.160247  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:55.162315  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:55.162337  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:55.162344  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:55.162350  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:55.162355  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:55 GMT
	I1212 22:22:55.162360  101931 round_trippers.go:580]     Audit-Id: b7c387ad-fed1-4871-a79e-497e784aa17d
	I1212 22:22:55.162365  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:55.162373  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:55.162505  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:55.660179  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:55.660201  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:55.660209  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:55.660217  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:55.662468  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:55.662490  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:55.662501  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:55.662507  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:55.662512  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:55 GMT
	I1212 22:22:55.662517  101931 round_trippers.go:580]     Audit-Id: b1ed47fd-7b85-43f3-9d49-151472d0d24c
	I1212 22:22:55.662522  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:55.662527  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:55.662650  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:56.159246  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:56.159280  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:56.159289  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:56.159294  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:56.161460  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:56.161483  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:56.161490  101931 round_trippers.go:580]     Audit-Id: c9da1267-b038-4815-b7bb-f158f5e22845
	I1212 22:22:56.161498  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:56.161503  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:56.161509  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:56.161517  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:56.161522  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:56 GMT
	I1212 22:22:56.161674  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:56.161983  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:56.659576  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:56.659604  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:56.659612  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:56.659619  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:56.661838  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:56.661863  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:56.661873  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:56 GMT
	I1212 22:22:56.661881  101931 round_trippers.go:580]     Audit-Id: f840c8c1-0377-479d-a699-00866c66789f
	I1212 22:22:56.661888  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:56.661895  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:56.661903  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:56.661914  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:56.662082  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:57.159732  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:57.159755  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:57.159764  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:57.159770  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:57.161978  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:57.161999  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:57.162007  101931 round_trippers.go:580]     Audit-Id: 5f824755-19ca-4817-a3c3-7d45fc72f091
	I1212 22:22:57.162012  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:57.162017  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:57.162022  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:57.162027  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:57.162032  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:57 GMT
	I1212 22:22:57.162166  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:57.659780  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:57.659800  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:57.659808  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:57.659814  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:57.662024  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:57.662045  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:57.662052  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:57 GMT
	I1212 22:22:57.662058  101931 round_trippers.go:580]     Audit-Id: 095a3b73-093c-47c5-a106-a4e68c2d7c54
	I1212 22:22:57.662063  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:57.662069  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:57.662078  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:57.662086  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:57.662228  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:58.159911  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:58.159935  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:58.159943  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:58.159949  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:58.162202  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:58.162224  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:58.162234  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:58.162242  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:58.162251  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:58.162258  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:58 GMT
	I1212 22:22:58.162265  101931 round_trippers.go:580]     Audit-Id: 632b1851-5767-4ee1-9e0e-344735c5ce09
	I1212 22:22:58.162278  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:58.162403  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:58.162698  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:22:58.659997  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:58.660017  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:58.660025  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:58.660031  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:58.662260  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:58.662284  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:58.662294  101931 round_trippers.go:580]     Audit-Id: 2ccf41aa-9a75-4b6e-893c-b0355f37b16b
	I1212 22:22:58.662303  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:58.662309  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:58.662315  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:58.662320  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:58.662332  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:58 GMT
	I1212 22:22:58.662570  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:59.159992  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:59.160012  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:59.160021  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:59.160028  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:59.162040  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:59.162059  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:59.162066  101931 round_trippers.go:580]     Audit-Id: f6f87943-ce45-4144-a6c8-c17cb52edf9d
	I1212 22:22:59.162074  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:59.162079  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:59.162084  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:59.162090  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:59.162095  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:59 GMT
	I1212 22:22:59.162211  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:22:59.659786  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:22:59.659807  101931 round_trippers.go:469] Request Headers:
	I1212 22:22:59.659814  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:22:59.659820  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:22:59.661969  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:22:59.661992  101931 round_trippers.go:577] Response Headers:
	I1212 22:22:59.662002  101931 round_trippers.go:580]     Audit-Id: cbc30488-2d7c-4d05-8413-b09e905031ce
	I1212 22:22:59.662011  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:22:59.662020  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:22:59.662032  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:22:59.662039  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:22:59.662044  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:22:59 GMT
	I1212 22:22:59.662164  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:00.159609  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:00.159636  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:00.159646  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:00.159654  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:00.161816  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:00.161835  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:00.161850  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:00 GMT
	I1212 22:23:00.161856  101931 round_trippers.go:580]     Audit-Id: 3ed0f782-b978-4820-9677-7764b86aeeec
	I1212 22:23:00.161861  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:00.161866  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:00.161871  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:00.161879  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:00.162037  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:00.659261  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:00.659286  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:00.659295  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:00.659301  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:00.661433  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:00.661459  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:00.661469  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:00.661479  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:00 GMT
	I1212 22:23:00.661486  101931 round_trippers.go:580]     Audit-Id: 2ecb1962-6ae2-4e05-b55b-02025bc78310
	I1212 22:23:00.661494  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:00.661508  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:00.661516  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:00.661632  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:00.661935  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:23:01.160157  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:01.160178  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:01.160188  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:01.160197  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:01.162697  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:01.162722  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:01.162732  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:01.162740  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:01 GMT
	I1212 22:23:01.162748  101931 round_trippers.go:580]     Audit-Id: 7e5d3d4b-df57-44e5-94ed-d7a3ce662600
	I1212 22:23:01.162756  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:01.162769  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:01.162778  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:01.162926  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:01.659594  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:01.659615  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:01.659623  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:01.659631  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:01.661897  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:01.661915  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:01.661921  101931 round_trippers.go:580]     Audit-Id: 68504a6a-f95d-46c5-b064-ed18c0d6d217
	I1212 22:23:01.661927  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:01.661932  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:01.661940  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:01.661948  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:01.661956  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:01 GMT
	I1212 22:23:01.662082  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:02.159638  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:02.159659  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:02.159667  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:02.159673  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:02.161757  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:02.161773  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:02.161780  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:02 GMT
	I1212 22:23:02.161785  101931 round_trippers.go:580]     Audit-Id: ce29897b-2a2f-4d9c-84cc-b668f4c8328b
	I1212 22:23:02.161791  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:02.161799  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:02.161808  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:02.161819  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:02.161952  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:02.659314  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:02.659339  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:02.659352  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:02.659360  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:02.661940  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:02.661959  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:02.661965  101931 round_trippers.go:580]     Audit-Id: 9f366794-9a76-4492-b3f8-74afdb9e1a02
	I1212 22:23:02.661971  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:02.661976  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:02.661981  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:02.661986  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:02.662007  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:02 GMT
	I1212 22:23:02.662181  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:02.662497  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:23:03.159794  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:03.159815  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:03.159823  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:03.159829  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:03.162018  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:03.162038  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:03.162044  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:03.162050  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:03.162055  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:03 GMT
	I1212 22:23:03.162061  101931 round_trippers.go:580]     Audit-Id: 03c7d553-5921-487a-8ac3-ad910f6cc31e
	I1212 22:23:03.162066  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:03.162071  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:03.162213  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:03.659904  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:03.659926  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:03.659938  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:03.659945  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:03.661995  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:03.662017  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:03.662026  101931 round_trippers.go:580]     Audit-Id: d9895a1d-5237-4cd0-86c2-5f66ec065983
	I1212 22:23:03.662034  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:03.662041  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:03.662049  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:03.662056  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:03.662066  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:03 GMT
	I1212 22:23:03.662172  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:04.159844  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:04.159865  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:04.159873  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:04.159879  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:04.162032  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:04.162057  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:04.162065  101931 round_trippers.go:580]     Audit-Id: 2dfe03fd-631b-45eb-be48-14518a7809ac
	I1212 22:23:04.162073  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:04.162081  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:04.162090  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:04.162100  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:04.162108  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:04 GMT
	I1212 22:23:04.162257  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:04.659878  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:04.659900  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:04.659908  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:04.659914  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:04.662164  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:04.662191  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:04.662201  101931 round_trippers.go:580]     Audit-Id: bc244bfa-abb8-4d35-b1a3-2de9fa7b4e8e
	I1212 22:23:04.662210  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:04.662219  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:04.662231  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:04.662241  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:04.662253  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:04 GMT
	I1212 22:23:04.662378  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:04.662710  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:23:05.159895  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:05.159914  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:05.159922  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:05.159928  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:05.162025  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:05.162049  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:05.162058  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:05.162064  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:05.162069  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:05.162074  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:05.162080  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:05 GMT
	I1212 22:23:05.162086  101931 round_trippers.go:580]     Audit-Id: b5489f7a-8957-4e62-b45c-98c3228afd11
	I1212 22:23:05.162229  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:05.659841  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:05.659862  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:05.659870  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:05.659876  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:05.662016  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:05.662035  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:05.662045  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:05.662054  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:05.662062  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:05 GMT
	I1212 22:23:05.662070  101931 round_trippers.go:580]     Audit-Id: f5b532a2-02d0-4685-b15b-946b8e4a30b8
	I1212 22:23:05.662079  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:05.662091  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:05.662214  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:06.159819  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:06.159853  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:06.159862  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:06.159868  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:06.162107  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:06.162130  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:06.162139  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:06.162147  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:06.162154  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:06.162161  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:06 GMT
	I1212 22:23:06.162168  101931 round_trippers.go:580]     Audit-Id: d141bf2a-1b53-4103-9c76-8a13c2341a8a
	I1212 22:23:06.162176  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:06.162356  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:06.660175  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:06.660201  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:06.660209  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:06.660215  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:06.662439  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:06.662461  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:06.662470  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:06.662478  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:06.662485  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:06 GMT
	I1212 22:23:06.662493  101931 round_trippers.go:580]     Audit-Id: 79c308d5-2ffe-450a-a400-d7cd710d3317
	I1212 22:23:06.662504  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:06.662517  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:06.662624  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:06.662951  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:23:07.159758  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:07.159778  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:07.159786  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:07.159792  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:07.161964  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:07.161992  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:07.162003  101931 round_trippers.go:580]     Audit-Id: ce42f951-a989-4a7e-9f83-3a25d044fad2
	I1212 22:23:07.162011  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:07.162018  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:07.162031  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:07.162042  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:07.162053  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:07 GMT
	I1212 22:23:07.162202  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:07.659805  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:07.659828  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:07.659836  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:07.659842  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:07.662141  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:07.662214  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:07.662234  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:07.662251  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:07 GMT
	I1212 22:23:07.662267  101931 round_trippers.go:580]     Audit-Id: 131ca802-1e95-4c98-8d37-e2d305232bcd
	I1212 22:23:07.662294  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:07.662311  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:07.662365  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:07.664270  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:08.159951  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:08.159973  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:08.159981  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:08.159987  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:08.162198  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:08.162220  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:08.162226  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:08 GMT
	I1212 22:23:08.162234  101931 round_trippers.go:580]     Audit-Id: e5e60d61-47ff-448b-956d-0a1d7129eb8d
	I1212 22:23:08.162239  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:08.162244  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:08.162249  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:08.162254  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:08.162388  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:08.660027  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:08.660048  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:08.660056  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:08.660062  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:08.662183  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:08.662208  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:08.662216  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:08 GMT
	I1212 22:23:08.662221  101931 round_trippers.go:580]     Audit-Id: d7c6aeb1-c3b2-4bb9-9bee-ee2fd77fba62
	I1212 22:23:08.662228  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:08.662236  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:08.662248  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:08.662263  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:08.662421  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:09.159870  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:09.159893  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:09.159901  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:09.159907  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:09.161974  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:09.161992  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:09.161999  101931 round_trippers.go:580]     Audit-Id: 4b0c6db6-c227-4d79-b4c7-18213b6e4d27
	I1212 22:23:09.162005  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:09.162010  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:09.162015  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:09.162022  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:09.162027  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:09 GMT
	I1212 22:23:09.162150  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:09.162451  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:23:09.659848  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:09.659869  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:09.659880  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:09.659887  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:09.662106  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:09.662125  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:09.662131  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:09.662140  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:09.662148  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:09 GMT
	I1212 22:23:09.662156  101931 round_trippers.go:580]     Audit-Id: 5c493843-0005-4d06-87f1-39baee4f1f83
	I1212 22:23:09.662164  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:09.662171  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:09.662286  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:10.159980  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:10.160001  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:10.160009  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:10.160015  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:10.162149  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:10.162167  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:10.162173  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:10.162178  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:10.162185  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:10 GMT
	I1212 22:23:10.162193  101931 round_trippers.go:580]     Audit-Id: b8049055-7643-47d1-937e-f6288fc4622e
	I1212 22:23:10.162201  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:10.162209  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:10.162325  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:10.659946  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:10.659969  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:10.659977  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:10.659987  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:10.662074  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:10.662098  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:10.662108  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:10 GMT
	I1212 22:23:10.662117  101931 round_trippers.go:580]     Audit-Id: d373879c-2dcc-4949-b20d-c8b3827f99d5
	I1212 22:23:10.662125  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:10.662133  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:10.662140  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:10.662147  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:10.662256  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:11.159910  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:11.159929  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:11.159937  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:11.159943  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:11.162158  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:11.162178  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:11.162188  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:11.162196  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:11.162205  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:11 GMT
	I1212 22:23:11.162214  101931 round_trippers.go:580]     Audit-Id: dc12d075-c3f5-4e45-bc10-6a0b647be75d
	I1212 22:23:11.162226  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:11.162233  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:11.162452  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:11.162740  101931 node_ready.go:58] node "multinode-764961-m02" has status "Ready":"False"
	I1212 22:23:11.660100  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:11.660120  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:11.660132  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:11.660138  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:11.662416  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:11.662435  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:11.662441  101931 round_trippers.go:580]     Audit-Id: 3e0abd70-e23b-4974-b81d-96f2046f98d5
	I1212 22:23:11.662457  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:11.662465  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:11.662472  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:11.662480  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:11.662492  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:11 GMT
	I1212 22:23:11.662606  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"469","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 6031 chars]
	I1212 22:23:12.160207  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:12.160230  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.160238  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.160245  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.162311  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:12.162329  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.162335  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.162341  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.162346  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.162352  101931 round_trippers.go:580]     Audit-Id: 0bcf080f-7572-40cc-ba1a-9a55b6c8267b
	I1212 22:23:12.162360  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.162369  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.162521  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"492","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I1212 22:23:12.162806  101931 node_ready.go:49] node "multinode-764961-m02" has status "Ready":"True"
	I1212 22:23:12.162820  101931 node_ready.go:38] duration metric: took 31.509371719s waiting for node "multinode-764961-m02" to be "Ready" ...
	I1212 22:23:12.162829  101931 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 22:23:12.162885  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1212 22:23:12.162895  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.162902  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.162908  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.166121  101931 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 22:23:12.166148  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.166159  101931 round_trippers.go:580]     Audit-Id: 77d14549-3a59-48ee-99e5-65b66d3f5f4b
	I1212 22:23:12.166168  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.166175  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.166186  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.166193  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.166207  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.166696  101931 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"403","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1212 22:23:12.168676  101931 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b6lvq" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.168744  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-b6lvq
	I1212 22:23:12.168753  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.168760  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.168766  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.170426  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.170442  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.170449  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.170457  101931 round_trippers.go:580]     Audit-Id: 7f02958a-0ee3-45ac-97d4-b8e94fcb2828
	I1212 22:23:12.170469  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.170482  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.170494  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.170506  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.170600  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-b6lvq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b130b370-8465-4b9a-973d-8ff1bb6df10a","resourceVersion":"403","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d89589d6-59eb-4c5e-9095-44f71c25c306","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d89589d6-59eb-4c5e-9095-44f71c25c306\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1212 22:23:12.171024  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:12.171037  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.171044  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.171049  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.172729  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.172749  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.172759  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.172768  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.172777  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.172788  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.172797  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.172805  101931 round_trippers.go:580]     Audit-Id: 87955e45-1d66-495b-967b-c07390cb6379
	I1212 22:23:12.172926  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:23:12.173222  101931 pod_ready.go:92] pod "coredns-5dd5756b68-b6lvq" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:12.173238  101931 pod_ready.go:81] duration metric: took 4.541049ms waiting for pod "coredns-5dd5756b68-b6lvq" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.173246  101931 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.173284  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-764961
	I1212 22:23:12.173292  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.173298  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.173304  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.174864  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.174879  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.174885  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.174894  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.174902  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.174918  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.174926  101931 round_trippers.go:580]     Audit-Id: 5ab9a75a-059c-4e02-8a8a-47a8f3495773
	I1212 22:23:12.174938  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.175040  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-764961","namespace":"kube-system","uid":"5295004b-e5f0-4870-9c31-a49e4912eb6b","resourceVersion":"260","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"b159538976e895ac2cf46c4cbb67dcbf","kubernetes.io/config.mirror":"b159538976e895ac2cf46c4cbb67dcbf","kubernetes.io/config.seen":"2023-12-12T22:21:37.367840499Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1212 22:23:12.175350  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:12.175363  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.175369  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.175375  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.176818  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.176833  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.176839  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.176844  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.176854  101931 round_trippers.go:580]     Audit-Id: 055c13d0-ab9d-45f3-8b49-7fe29a7fb382
	I1212 22:23:12.176864  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.176875  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.176884  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.177026  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:23:12.177302  101931 pod_ready.go:92] pod "etcd-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:12.177317  101931 pod_ready.go:81] duration metric: took 4.065993ms waiting for pod "etcd-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.177329  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.177384  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-764961
	I1212 22:23:12.177394  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.177404  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.177414  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.179158  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.179173  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.179181  101931 round_trippers.go:580]     Audit-Id: c7b088d2-e8a1-4533-83e3-218fc168b045
	I1212 22:23:12.179190  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.179197  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.179208  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.179214  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.179222  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.179334  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-764961","namespace":"kube-system","uid":"9570752b-45ee-405d-a6e2-fc0b9aa28c7b","resourceVersion":"295","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e2d2f9495e644af7e6228f8e856d9854","kubernetes.io/config.mirror":"e2d2f9495e644af7e6228f8e856d9854","kubernetes.io/config.seen":"2023-12-12T22:21:37.367844265Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1212 22:23:12.179804  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:12.179821  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.179833  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.179842  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.181301  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.181317  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.181326  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.181335  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.181345  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.181357  101931 round_trippers.go:580]     Audit-Id: b4ed1caf-262f-4f9f-b724-bccd9623ce70
	I1212 22:23:12.181363  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.181369  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.181457  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:23:12.181792  101931 pod_ready.go:92] pod "kube-apiserver-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:12.181807  101931 pod_ready.go:81] duration metric: took 4.467077ms waiting for pod "kube-apiserver-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.181820  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.181878  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-764961
	I1212 22:23:12.181888  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.181899  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.181911  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.183376  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.183390  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.183397  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.183402  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.183407  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.183412  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.183417  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.183423  101931 round_trippers.go:580]     Audit-Id: fd3f5568-3fb3-4951-abe6-fedceffd6a68
	I1212 22:23:12.183610  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-764961","namespace":"kube-system","uid":"01087a6d-6662-4b6c-8793-8a9da414ac2e","resourceVersion":"261","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1359d7bb8db476a8baa267a12bd5c655","kubernetes.io/config.mirror":"1359d7bb8db476a8baa267a12bd5c655","kubernetes.io/config.seen":"2023-12-12T22:21:31.876138175Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1212 22:23:12.184059  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:12.184074  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.184084  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.184098  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.185438  101931 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 22:23:12.185456  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.185465  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.185473  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.185488  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.185496  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.185510  101931 round_trippers.go:580]     Audit-Id: 02c05f4a-a43c-4b42-90ad-38963f4e44c7
	I1212 22:23:12.185517  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.185600  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:23:12.185825  101931 pod_ready.go:92] pod "kube-controller-manager-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:12.185837  101931 pod_ready.go:81] duration metric: took 4.007484ms waiting for pod "kube-controller-manager-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.185847  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8p7w9" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.361213  101931 request.go:629] Waited for 175.319123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p7w9
	I1212 22:23:12.361278  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p7w9
	I1212 22:23:12.361283  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.361291  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.361304  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.363412  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:12.363433  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.363442  101931 round_trippers.go:580]     Audit-Id: 56f9e8b3-1716-4f8c-82e2-d0385bab0b3f
	I1212 22:23:12.363458  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.363466  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.363474  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.363486  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.363496  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.363612  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8p7w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9c44c8ae-7d36-4e55-ae80-2e80629bb167","resourceVersion":"460","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae66b779-df4d-4acd-be39-7df3a52caef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae66b779-df4d-4acd-be39-7df3a52caef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1212 22:23:12.560266  101931 request.go:629] Waited for 196.27018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:12.560316  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961-m02
	I1212 22:23:12.560321  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.560338  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.560350  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.562511  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:12.562536  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.562543  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.562548  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.562553  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.562559  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.562566  101931 round_trippers.go:580]     Audit-Id: dd790acf-e799-43e7-84a0-de2a97dd8f57
	I1212 22:23:12.562574  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.562730  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961-m02","uid":"49547ad4-5305-4242-a34b-9b985a8abede","resourceVersion":"492","creationTimestamp":"2023-12-12T22:22:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_12T22_22_40_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:22:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5848 chars]
	I1212 22:23:12.563043  101931 pod_ready.go:92] pod "kube-proxy-8p7w9" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:12.563059  101931 pod_ready.go:81] duration metric: took 377.206486ms waiting for pod "kube-proxy-8p7w9" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.563069  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smjqf" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.760424  101931 request.go:629] Waited for 197.277567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-smjqf
	I1212 22:23:12.760479  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-smjqf
	I1212 22:23:12.760484  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.760500  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.760511  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.762631  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:12.762649  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.762656  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.762661  101931 round_trippers.go:580]     Audit-Id: e6fdda7a-d694-46c6-b1f4-dc3e72eeff7d
	I1212 22:23:12.762666  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.762671  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.762677  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.762683  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.762847  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-smjqf","generateName":"kube-proxy-","namespace":"kube-system","uid":"00b947bc-a444-4666-a553-2d8a2c47b671","resourceVersion":"369","creationTimestamp":"2023-12-12T22:21:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ae66b779-df4d-4acd-be39-7df3a52caef1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae66b779-df4d-4acd-be39-7df3a52caef1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1212 22:23:12.960522  101931 request.go:629] Waited for 197.298301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:12.960591  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:12.960596  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:12.960604  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:12.960611  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:12.962875  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:12.962892  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:12.962899  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:12.962904  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:12.962910  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:12.962915  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:12 GMT
	I1212 22:23:12.962920  101931 round_trippers.go:580]     Audit-Id: 050f8427-ef1f-4429-8a44-2c69c3b6b96e
	I1212 22:23:12.962925  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:12.963034  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:23:12.963315  101931 pod_ready.go:92] pod "kube-proxy-smjqf" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:12.963329  101931 pod_ready.go:81] duration metric: took 400.255002ms waiting for pod "kube-proxy-smjqf" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:12.963338  101931 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:13.160746  101931 request.go:629] Waited for 197.339936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-764961
	I1212 22:23:13.160800  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-764961
	I1212 22:23:13.160805  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:13.160813  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:13.160821  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:13.163158  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:13.163191  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:13.163201  101931 round_trippers.go:580]     Audit-Id: ed3c9520-82ae-4bb6-85bf-b5d3c4c5c593
	I1212 22:23:13.163209  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:13.163217  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:13.163228  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:13.163237  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:13.163246  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:13 GMT
	I1212 22:23:13.163420  101931 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-764961","namespace":"kube-system","uid":"7f50f24e-9282-404b-9242-703201ac2c66","resourceVersion":"259","creationTimestamp":"2023-12-12T22:21:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0b8d03fbc8eff19771ab68a013adbf93","kubernetes.io/config.mirror":"0b8d03fbc8eff19771ab68a013adbf93","kubernetes.io/config.seen":"2023-12-12T22:21:37.367847050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T22:21:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1212 22:23:13.361167  101931 request.go:629] Waited for 197.340779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:13.361220  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-764961
	I1212 22:23:13.361225  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:13.361232  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:13.361239  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:13.363379  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:13.363398  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:13.363408  101931 round_trippers.go:580]     Audit-Id: 38fb7fbb-9fde-4b98-ac34-b7babbf7ffc2
	I1212 22:23:13.363415  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:13.363423  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:13.363430  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:13.363438  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:13.363450  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:13 GMT
	I1212 22:23:13.363585  101931 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-12T22:21:34Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1212 22:23:13.363899  101931 pod_ready.go:92] pod "kube-scheduler-multinode-764961" in "kube-system" namespace has status "Ready":"True"
	I1212 22:23:13.363916  101931 pod_ready.go:81] duration metric: took 400.571118ms waiting for pod "kube-scheduler-multinode-764961" in "kube-system" namespace to be "Ready" ...
	I1212 22:23:13.363929  101931 pod_ready.go:38] duration metric: took 1.201086217s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
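
[Editor's note] The pod_ready.go entries above record a simple poll-and-check loop: fetch each control-plane pod, read its Ready condition, and log the wait duration. Below is a minimal sketch of that pattern with client-go; it is not minikube's implementation, and the helper name, polling interval, and clientset construction are illustrative assumptions.

	// Hedged sketch of the readiness loop recorded above; not minikube's code.
	// Assumes a working *kubernetes.Clientset.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient: keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
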
	I1212 22:23:13.363954  101931 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 22:23:13.364015  101931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:23:13.374419  101931 system_svc.go:56] duration metric: took 10.461008ms WaitForService to wait for kubelet.
	I1212 22:23:13.374444  101931 kubeadm.go:581] duration metric: took 32.736433192s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 22:23:13.374470  101931 node_conditions.go:102] verifying NodePressure condition ...
	I1212 22:23:13.560862  101931 request.go:629] Waited for 186.330423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1212 22:23:13.560945  101931 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1212 22:23:13.560957  101931 round_trippers.go:469] Request Headers:
	I1212 22:23:13.560970  101931 round_trippers.go:473]     Accept: application/json, */*
	I1212 22:23:13.560983  101931 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 22:23:13.563540  101931 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 22:23:13.563577  101931 round_trippers.go:577] Response Headers:
	I1212 22:23:13.563586  101931 round_trippers.go:580]     Content-Type: application/json
	I1212 22:23:13.563594  101931 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bad35084-a891-4701-b52f-b80ae13167d1
	I1212 22:23:13.563602  101931 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41293918-e891-44c8-a5f7-c79fe96461ea
	I1212 22:23:13.563610  101931 round_trippers.go:580]     Date: Tue, 12 Dec 2023 22:23:13 GMT
	I1212 22:23:13.563620  101931 round_trippers.go:580]     Audit-Id: 2eb04f72-d354-4c87-a459-1ac584da6794
	I1212 22:23:13.563628  101931 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 22:23:13.563909  101931 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-764961","uid":"081c8671-275e-49b5-9fc5-88c32fff17de","resourceVersion":"384","creationTimestamp":"2023-12-12T22:21:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-764961","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b3e481dbceb877ce85ff888adf9de756f54684f","minikube.k8s.io/name":"multinode-764961","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T22_21_38_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12840 chars]
	I1212 22:23:13.564567  101931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 22:23:13.564593  101931 node_conditions.go:123] node cpu capacity is 8
	I1212 22:23:13.564606  101931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1212 22:23:13.564614  101931 node_conditions.go:123] node cpu capacity is 8
	I1212 22:23:13.564622  101931 node_conditions.go:105] duration metric: took 190.144124ms to run NodePressure ...
	I1212 22:23:13.564635  101931 start.go:228] waiting for startup goroutines ...
	I1212 22:23:13.564668  101931 start.go:242] writing updated cluster config ...
	I1212 22:23:13.565007  101931 ssh_runner.go:195] Run: rm -f paused
	I1212 22:23:13.610022  101931 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1212 22:23:13.612760  101931 out.go:177] * Done! kubectl is now configured to use "multinode-764961" cluster and "default" namespace by default
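
[Editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the log above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the API server's Priority and Fairness layer. A minimal sketch of where those knobs live follows; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not minikube's actual settings.

	// Minimal sketch, not part of the test run: raising client-go's
	// client-side rate limits, which produce the "Waited for ..." lines above.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5
		cfg.Burst = 100 // client-go default is 10
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}
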
	
	* 
	* ==> CRI-O <==
	* Dec 12 22:22:23 multinode-764961 crio[957]: time="2023-12-12 22:22:23.191788418Z" level=info msg="Starting container: 5fb3d0c753ce52dbbce6c6e6664d7f90d540d7b7bd775a79d110ecee60461cea" id=a89f7192-a01f-4b20-8032-00a2d11efeb0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 22:22:23 multinode-764961 crio[957]: time="2023-12-12 22:22:23.197282634Z" level=info msg="Created container 5fce6f72657676b00e5f3e548ea4a558e9619438bb98b51b143a1fa73848e4e7: kube-system/coredns-5dd5756b68-b6lvq/coredns" id=68f93c7f-6a95-48f8-8210-ddd0a4049927 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 22:22:23 multinode-764961 crio[957]: time="2023-12-12 22:22:23.197856919Z" level=info msg="Starting container: 5fce6f72657676b00e5f3e548ea4a558e9619438bb98b51b143a1fa73848e4e7" id=d4da25ef-e001-40de-a15b-dca7829042eb name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 22:22:23 multinode-764961 crio[957]: time="2023-12-12 22:22:23.198178358Z" level=info msg="Started container" PID=2358 containerID=5fb3d0c753ce52dbbce6c6e6664d7f90d540d7b7bd775a79d110ecee60461cea description=kube-system/storage-provisioner/storage-provisioner id=a89f7192-a01f-4b20-8032-00a2d11efeb0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a62321ae5dfcea2e30ef4da8aad4f1c1155a232479bd6359433024ab332ad30f
	Dec 12 22:22:23 multinode-764961 crio[957]: time="2023-12-12 22:22:23.204433984Z" level=info msg="Started container" PID=2368 containerID=5fce6f72657676b00e5f3e548ea4a558e9619438bb98b51b143a1fa73848e4e7 description=kube-system/coredns-5dd5756b68-b6lvq/coredns id=d4da25ef-e001-40de-a15b-dca7829042eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=f45a49cd0fc48166d0c605dcf7670870a94697dfd0f1c1e34419dd42a0e63ad5
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.603137364Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-bbwmj/POD" id=ad5578ed-4bc0-4f11-a8dd-cd360f65222f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.603199351Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.616162308Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-bbwmj Namespace:default ID:648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57 UID:2f3d30a6-ac6a-474d-918d-dab3da829a79 NetNS:/var/run/netns/6ab839c3-1867-4b61-8d57-8618c00402f6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.616191482Z" level=info msg="Adding pod default_busybox-5bc68d56bd-bbwmj to CNI network \"kindnet\" (type=ptp)"
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.624899068Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-bbwmj Namespace:default ID:648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57 UID:2f3d30a6-ac6a-474d-918d-dab3da829a79 NetNS:/var/run/netns/6ab839c3-1867-4b61-8d57-8618c00402f6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.625009715Z" level=info msg="Checking pod default_busybox-5bc68d56bd-bbwmj for CNI network kindnet (type=ptp)"
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.650688972Z" level=info msg="Ran pod sandbox 648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57 with infra container: default/busybox-5bc68d56bd-bbwmj/POD" id=ad5578ed-4bc0-4f11-a8dd-cd360f65222f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.651805922Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=51d38032-1953-4e00-8d6c-68d3a8af11dd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.652061439Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=51d38032-1953-4e00-8d6c-68d3a8af11dd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.652859611Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=3be6f04f-10b5-4b36-aac5-1d6697ab9339 name=/runtime.v1.ImageService/PullImage
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.656188837Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 12 22:23:14 multinode-764961 crio[957]: time="2023-12-12 22:23:14.828950164Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.256877365Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=3be6f04f-10b5-4b36-aac5-1d6697ab9339 name=/runtime.v1.ImageService/PullImage
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.257857368Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2ecd9b3a-c619-474e-b236-6e2932d494e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.258465203Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2ecd9b3a-c619-474e-b236-6e2932d494e2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.259252283Z" level=info msg="Creating container: default/busybox-5bc68d56bd-bbwmj/busybox" id=8d9720e7-7837-40a5-b14d-be22a0b362d6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.259353705Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.333889007Z" level=info msg="Created container b864e14b5682c897c82269607b0eaa0e041c75e8c07829fdf4c2f4707f437492: default/busybox-5bc68d56bd-bbwmj/busybox" id=8d9720e7-7837-40a5-b14d-be22a0b362d6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.334424257Z" level=info msg="Starting container: b864e14b5682c897c82269607b0eaa0e041c75e8c07829fdf4c2f4707f437492" id=de16a237-1a3f-4292-9243-78f478aff423 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 22:23:15 multinode-764961 crio[957]: time="2023-12-12 22:23:15.340386892Z" level=info msg="Started container" PID=2548 containerID=b864e14b5682c897c82269607b0eaa0e041c75e8c07829fdf4c2f4707f437492 description=default/busybox-5bc68d56bd-bbwmj/busybox id=de16a237-1a3f-4292-9243-78f478aff423 name=/runtime.v1.RuntimeService/StartContainer sandboxID=648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57
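
[Editor's note] The CRI-O entries above trace the standard CRI pod lifecycle: RunPodSandbox, then ImageStatus/PullImage, then CreateContainer and StartContainer. A hedged sketch of talking to the same CRI endpoint is below, listing containers only; the socket path matches the cri-socket annotation seen in this log, and the package versions are assumptions.

	// Hedged sketch: querying CRI-O over its unix socket with the CRI gRPC API.
	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O serves the CRI on a local unix socket (see the
		// kubeadm.alpha.kubernetes.io/cri-socket annotation above).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.Background(),
			&runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}
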
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b864e14b5682c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   648e9c0281c20       busybox-5bc68d56bd-bbwmj
	5fce6f7265767       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      56 seconds ago       Running             coredns                   0                   f45a49cd0fc48       coredns-5dd5756b68-b6lvq
	5fb3d0c753ce5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       0                   a62321ae5dfce       storage-provisioner
	50cc11a8621ec       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   40b2602cff58d       kindnet-5fp6n
	711d427be4c05       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   72a5b7a40e7b2       kube-proxy-smjqf
	1e76449622e2c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   24be3ebd286e9       etcd-multinode-764961
	d7794751baba8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   b866450477b05       kube-apiserver-multinode-764961
	74fc9f7bf08ef       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   605bd9ad8fde2       kube-scheduler-multinode-764961
	ab3e5f321f93a       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   00a035665cde5       kube-controller-manager-multinode-764961
	
	* 
	* ==> coredns [5fce6f72657676b00e5f3e548ea4a558e9619438bb98b51b143a1fa73848e4e7] <==
	* [INFO] 10.244.1.2:38431 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001146s
	[INFO] 10.244.0.3:38295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127915s
	[INFO] 10.244.0.3:37106 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001574036s
	[INFO] 10.244.0.3:38671 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071131s
	[INFO] 10.244.0.3:35444 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067494s
	[INFO] 10.244.0.3:59906 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116369s
	[INFO] 10.244.0.3:60101 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000056292s
	[INFO] 10.244.0.3:58358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056048s
	[INFO] 10.244.0.3:57438 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053386s
	[INFO] 10.244.1.2:60652 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117695s
	[INFO] 10.244.1.2:57711 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091113s
	[INFO] 10.244.1.2:41093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067577s
	[INFO] 10.244.1.2:46890 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071858s
	[INFO] 10.244.0.3:40974 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137792s
	[INFO] 10.244.0.3:57254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076042s
	[INFO] 10.244.0.3:45768 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070364s
	[INFO] 10.244.0.3:47232 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051402s
	[INFO] 10.244.1.2:37554 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112945s
	[INFO] 10.244.1.2:39563 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000137631s
	[INFO] 10.244.1.2:47091 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000106106s
	[INFO] 10.244.1.2:54994 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000090003s
	[INFO] 10.244.0.3:39903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103866s
	[INFO] 10.244.0.3:60710 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068869s
	[INFO] 10.244.0.3:58079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00006975s
	[INFO] 10.244.0.3:55910 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005291s
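
[Editor's note] The NXDOMAIN/NOERROR pairs above are the client's resolv.conf search-path expansion at work: with the kubelet-generated ndots:5 setting, a short name such as kubernetes.default is tried against each search domain until kubernetes.default.svc.cluster.local resolves. The sketch below illustrates that expansion order; the search list is the standard one for a pod in the default namespace and is an assumption here.

	// Hedged illustration of the search-path expansion behind the coredns
	// queries above; only meaningful when run inside the cluster.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		searchDomains := []string{
			"default.svc.cluster.local", // pod's own namespace first
			"svc.cluster.local",
			"cluster.local",
		}
		name := "kubernetes.default"
		for _, d := range searchDomains {
			candidate := name + "." + d
			if _, err := net.LookupHost(candidate); err != nil {
				fmt.Println("no answer (NXDOMAIN):", candidate) // mirrors the log lines
				continue
			}
			fmt.Println("resolved:", candidate)
			return
		}
	}
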
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-764961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-764961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-764961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T22_21_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:21:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-764961
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:23:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:22:22 +0000   Tue, 12 Dec 2023 22:21:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:22:22 +0000   Tue, 12 Dec 2023 22:21:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:22:22 +0000   Tue, 12 Dec 2023 22:21:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:22:22 +0000   Tue, 12 Dec 2023 22:22:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-764961
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 6eb58d6f517645a283ec7798ac49248b
	  System UUID:                1f965ae9-72d1-4e81-b632-af2673f864a9
	  Boot ID:                    e32ab69d-45ad-4e0a-b786-ce498c8395cb
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-bbwmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-b6lvq                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-multinode-764961                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-5fp6n                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-multinode-764961             250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-multinode-764961    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-smjqf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-multinode-764961             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)   100m (1%)
	  memory             220Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 88s   kube-proxy       
	  Normal  Starting                 103s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s  kubelet          Node multinode-764961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s  kubelet          Node multinode-764961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s  kubelet          Node multinode-764961 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s   node-controller  Node multinode-764961 event: Registered Node multinode-764961 in Controller
	  Normal  NodeReady                58s   kubelet          Node multinode-764961 status is now: NodeReady
	
	
	Name:               multinode-764961-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-764961-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b3e481dbceb877ce85ff888adf9de756f54684f
	                    minikube.k8s.io/name=multinode-764961
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_12T22_22_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 22:22:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-764961-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 22:23:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 22:23:11 +0000   Tue, 12 Dec 2023 22:22:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 22:23:11 +0000   Tue, 12 Dec 2023 22:22:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 22:23:11 +0000   Tue, 12 Dec 2023 22:22:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 22:23:11 +0000   Tue, 12 Dec 2023 22:23:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-764961-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859424Ki
	  pods:               110
	System Info:
	  Machine ID:                 3edbfe50b17d4515a28f3e6783dc8c97
	  System UUID:                3db17490-fc1e-44c4-ac62-279ea85192ad
	  Boot ID:                    e32ab69d-45ad-4e0a-b786-ce498c8395cb
	  Kernel Version:             5.15.0-1047-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-67rxw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-bftp6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-proxy-8p7w9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  RegisteredNode           40s                node-controller  Node multinode-764961-m02 event: Registered Node multinode-764961-m02 in Controller
	  Normal  NodeHasSufficientMemory  40s (x5 over 42s)  kubelet          Node multinode-764961-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x5 over 42s)  kubelet          Node multinode-764961-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x5 over 42s)  kubelet          Node multinode-764961-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9s                 kubelet          Node multinode-764961-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004932] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006583] FS-Cache: N-cookie d=000000005393a62b{9p.inode} n=00000000a5f612b8
	[  +0.008742] FS-Cache: N-key=[8] '89a00f0200000000'
	[  +0.280632] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006754] FS-Cache: O-cookie d=000000005393a62b{9p.inode} n=00000000462ec489
	[  +0.007361] FS-Cache: O-key=[8] '99a00f0200000000'
	[  +0.004933] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006595] FS-Cache: N-cookie d=000000005393a62b{9p.inode} n=000000004d1e5a71
	[  +0.008728] FS-Cache: N-key=[8] '99a00f0200000000'
	[Dec12 22:12] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 22:13] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[  +1.020215] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[  +2.015789] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[Dec12 22:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[  +8.191161] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[ +16.130321] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	[Dec12 22:15] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: e6 84 25 42 79 31 82 56 85 78 5b 08 08 00
	
	* 
	* ==> etcd [1e76449622e2c831042ba2e8723ad2bf9a6997a0c58fe11c4cd57222bc2b59ff] <==
	* {"level":"info","ts":"2023-12-12T22:21:32.629217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-12T22:21:32.631501Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-12T22:21:32.631901Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T22:21:32.632017Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-12T22:21:32.632078Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-12T22:21:32.632125Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T22:21:32.632156Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T22:21:32.819795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T22:21:32.819847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T22:21:32.819879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-12T22:21:32.819898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T22:21:32.819907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-12T22:21:32.81992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T22:21:32.819941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-12T22:21:32.82096Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:21:32.821689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:21:32.821687Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-764961 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T22:21:32.821757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T22:21:32.821961Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T22:21:32.822033Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T22:21:32.822072Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:21:32.822151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:21:32.82218Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T22:21:32.822947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-12T22:21:32.822987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
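The election lines above are normal for a single-member cluster: b2c6679ac05f2cf1 pre-votes, votes for itself, and becomes leader at term 2. If etcd health were in question, endpoint status can be queried with the certificate paths logged above; a sketch, assuming the static pod carries the conventional name etcd-multinode-764961:

	kubectl --context multinode-764961 -n kube-system exec etcd-multinode-764961 -- etcdctl \
	  --endpoints=https://192.168.58.2:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status -w table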
	* 
	* ==> kernel <==
	*  22:23:20 up  1:05,  0 users,  load average: 0.62, 1.13, 0.97
	Linux multinode-764961 5.15.0-1047-gcp #55~20.04.1-Ubuntu SMP Wed Nov 15 11:38:25 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [50cc11a8621eceee76901e2d9269d2450f4022c4b05a3a9a9a525006882fbd54] <==
	* I1212 22:21:52.016873       1 main.go:116] setting mtu 1500 for CNI 
	I1212 22:21:52.016896       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 22:21:52.016923       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 22:22:22.245363       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1212 22:22:22.254880       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 22:22:22.254912       1 main.go:227] handling current node
	I1212 22:22:32.270279       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 22:22:32.270315       1 main.go:227] handling current node
	I1212 22:22:42.282716       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 22:22:42.282743       1 main.go:227] handling current node
	I1212 22:22:42.282752       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 22:22:42.282756       1 main.go:250] Node multinode-764961-m02 has CIDR [10.244.1.0/24] 
	I1212 22:22:42.282897       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1212 22:22:52.295425       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 22:22:52.295455       1 main.go:227] handling current node
	I1212 22:22:52.295468       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 22:22:52.295475       1 main.go:250] Node multinode-764961-m02 has CIDR [10.244.1.0/24] 
	I1212 22:23:02.308596       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 22:23:02.308619       1 main.go:227] handling current node
	I1212 22:23:02.308628       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 22:23:02.308634       1 main.go:250] Node multinode-764961-m02 has CIDR [10.244.1.0/24] 
	I1212 22:23:12.313124       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1212 22:23:12.313147       1 main.go:227] handling current node
	I1212 22:23:12.313156       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1212 22:23:12.313161       1 main.go:250] Node multinode-764961-m02 has CIDR [10.244.1.0/24] 
	
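kindnet saw the second node join and installed a route to its pod CIDR via 192.168.58.3 (the "Adding route" line above). The route can be read back from the primary node; a sketch using the values from these logs:

	out/minikube-linux-amd64 -p multinode-764961 ssh "ip route show 10.244.1.0/24"
	# expected to print: 10.244.1.0/24 via 192.168.58.3 (interface name may vary)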
	* 
	* ==> kube-apiserver [d7794751baba85bed959af44653060222b7cf4955144b17ebcfd123f0fd5a2bc] <==
	* I1212 22:21:34.735186       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 22:21:34.735410       1 aggregator.go:166] initial CRD sync complete...
	I1212 22:21:34.735467       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 22:21:34.735499       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 22:21:34.735531       1 cache.go:39] Caches are synced for autoregister controller
	I1212 22:21:34.736926       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 22:21:34.815940       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1212 22:21:34.821456       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1212 22:21:34.821618       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1212 22:21:35.026137       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 22:21:35.589849       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 22:21:35.593340       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 22:21:35.593385       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 22:21:35.952025       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 22:21:35.981073       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 22:21:36.020856       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 22:21:36.025710       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1212 22:21:36.026475       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 22:21:36.030029       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 22:21:36.734965       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 22:21:37.317525       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 22:21:37.326472       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 22:21:37.334467       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 22:21:51.031685       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1212 22:21:51.133598       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
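The allocation lines above show the expected cluster IPs: 10.96.0.1 for the kubernetes service and 10.96.0.10 for kube-dns. Reading them back from the client side is a one-liner each:

	kubectl --context multinode-764961 get svc kubernetes -o wide
	kubectl --context multinode-764961 -n kube-system get svc kube-dns -o wide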
	* 
	* ==> kube-controller-manager [ab3e5f321f93a20a49a3b94b2144c47365d14e25763dee0c3b9b3380ad5004d6] <==
	* I1212 22:22:22.791838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.873µs"
	I1212 22:22:22.807014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.182µs"
	I1212 22:22:23.584715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.627µs"
	I1212 22:22:23.615692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.90438ms"
	I1212 22:22:23.615793       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.744µs"
	I1212 22:22:25.211509       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 22:22:40.058307       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-764961-m02\" does not exist"
	I1212 22:22:40.064639       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-764961-m02" podCIDRs=["10.244.1.0/24"]
	I1212 22:22:40.067718       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8p7w9"
	I1212 22:22:40.069899       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bftp6"
	I1212 22:22:40.214398       1 event.go:307] "Event occurred" object="multinode-764961-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-764961-m02 event: Registered Node multinode-764961-m02 in Controller"
	I1212 22:22:40.214418       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-764961-m02"
	I1212 22:23:11.825010       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-764961-m02"
	I1212 22:23:14.282401       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1212 22:23:14.289272       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67rxw"
	I1212 22:23:14.295128       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-bbwmj"
	I1212 22:23:14.300805       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.687709ms"
	I1212 22:23:14.311310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.444467ms"
	I1212 22:23:14.311394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.392µs"
	I1212 22:23:14.311444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.111µs"
	I1212 22:23:15.229832       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-67rxw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-67rxw"
	I1212 22:23:15.690036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.429051ms"
	I1212 22:23:15.690115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.059µs"
	I1212 22:23:16.648904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.880612ms"
	I1212 22:23:16.648992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.827µs"
	
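The controller manager registered multinode-764961-m02 and assigned it PodCIDR 10.244.1.0/24, which matches the route kindnet installed. The assignment can be read back directly:

	kubectl --context multinode-764961 get node multinode-764961-m02 -o jsonpath='{.spec.podCIDR}'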
	* 
	* ==> kube-proxy [711d427be4c051d9ec88b114bedf9bebf5d91cfc2f488b98a392e324568db4d7] <==
	* I1212 22:21:51.957876       1 server_others.go:69] "Using iptables proxy"
	I1212 22:21:51.965655       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1212 22:21:52.118299       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 22:21:52.120184       1 server_others.go:152] "Using iptables Proxier"
	I1212 22:21:52.120224       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1212 22:21:52.120234       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1212 22:21:52.120263       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 22:21:52.120468       1 server.go:846] "Version info" version="v1.28.4"
	I1212 22:21:52.120480       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 22:21:52.121226       1 config.go:97] "Starting endpoint slice config controller"
	I1212 22:21:52.121758       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 22:21:52.121211       1 config.go:315] "Starting node config controller"
	I1212 22:21:52.121961       1 config.go:188] "Starting service config controller"
	I1212 22:21:52.122040       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 22:21:52.121981       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 22:21:52.222665       1 shared_informer.go:318] Caches are synced for node config
	I1212 22:21:52.222680       1 shared_informer.go:318] Caches are synced for service config
	I1212 22:21:52.222698       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
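kube-proxy came up in iptables mode and set route_localnet=1 so NodePorts answer on localhost; that same sysctl changes how 127.0.0.1-sourced traffic is routed, which is relevant to the martian-source drops in the dmesg section. A sketch for inspecting both (KUBE-SERVICES is the standard chain name for iptables mode):

	out/minikube-linux-amd64 -p multinode-764961 ssh "sysctl net.ipv4.conf.all.route_localnet; sudo iptables -t nat -S KUBE-SERVICES | head"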
	* 
	* ==> kube-scheduler [74fc9f7bf08ef6426060622eff5037ffc2530d47270ccda273ab7ed4917f6cc3] <==
	* W1212 22:21:34.831180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 22:21:34.831199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 22:21:34.831214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 22:21:34.831222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 22:21:34.831285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:21:34.831303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 22:21:34.831366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 22:21:34.831443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 22:21:34.831542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 22:21:34.831584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 22:21:34.831724       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 22:21:34.831795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 22:21:34.831840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:21:34.831865       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:21:34.832597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 22:21:34.832626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 22:21:34.832604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 22:21:34.832658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 22:21:35.638016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 22:21:35.638063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 22:21:35.727426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 22:21:35.727453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 22:21:35.811165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 22:21:35.811191       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1212 22:21:36.223025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
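The forbidden-list warnings above are the usual startup race: the scheduler's informers begin before the apiserver finishes bootstrapping RBAC, and the closing "Caches are synced" line at 22:21:36 shows they recovered. Had they persisted, the scheduler's permissions could be probed directly:

	kubectl --context multinode-764961 auth can-i list pods --as=system:kube-scheduler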
	* 
	* ==> kubelet <==
	* Dec 12 22:21:51 multinode-764961 kubelet[1592]: I1212 22:21:51.217060    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00b947bc-a444-4666-a553-2d8a2c47b671-kube-proxy\") pod \"kube-proxy-smjqf\" (UID: \"00b947bc-a444-4666-a553-2d8a2c47b671\") " pod="kube-system/kube-proxy-smjqf"
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: I1212 22:21:51.217092    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00b947bc-a444-4666-a553-2d8a2c47b671-lib-modules\") pod \"kube-proxy-smjqf\" (UID: \"00b947bc-a444-4666-a553-2d8a2c47b671\") " pod="kube-system/kube-proxy-smjqf"
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: I1212 22:21:51.217123    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87-xtables-lock\") pod \"kindnet-5fp6n\" (UID: \"2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87\") " pod="kube-system/kindnet-5fp6n"
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: I1212 22:21:51.217151    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00b947bc-a444-4666-a553-2d8a2c47b671-xtables-lock\") pod \"kube-proxy-smjqf\" (UID: \"00b947bc-a444-4666-a553-2d8a2c47b671\") " pod="kube-system/kube-proxy-smjqf"
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: I1212 22:21:51.217180    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87-cni-cfg\") pod \"kindnet-5fp6n\" (UID: \"2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87\") " pod="kube-system/kindnet-5fp6n"
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: I1212 22:21:51.217216    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb7rs\" (UniqueName: \"kubernetes.io/projected/2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87-kube-api-access-fb7rs\") pod \"kindnet-5fp6n\" (UID: \"2aa12cb0-e06a-4dc8-9995-e4fc5b6f6d87\") " pod="kube-system/kindnet-5fp6n"
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: W1212 22:21:51.516823    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio-40b2602cff58d1602422271f85a0c1c84fc4e82f6e01c8311c8bf85882589e5f WatchSource:0}: Error finding container 40b2602cff58d1602422271f85a0c1c84fc4e82f6e01c8311c8bf85882589e5f: Status 404 returned error can't find the container with id 40b2602cff58d1602422271f85a0c1c84fc4e82f6e01c8311c8bf85882589e5f
	Dec 12 22:21:51 multinode-764961 kubelet[1592]: W1212 22:21:51.517771    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio-72a5b7a40e7b20785f3d0a998f53c88d3cbdb22ef0eed6f41075f85cce7a4e48 WatchSource:0}: Error finding container 72a5b7a40e7b20785f3d0a998f53c88d3cbdb22ef0eed6f41075f85cce7a4e48: Status 404 returned error can't find the container with id 72a5b7a40e7b20785f3d0a998f53c88d3cbdb22ef0eed6f41075f85cce7a4e48
	Dec 12 22:21:52 multinode-764961 kubelet[1592]: I1212 22:21:52.530438    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-smjqf" podStartSLOduration=1.5304013520000002 podCreationTimestamp="2023-12-12 22:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:21:52.530223389 +0000 UTC m=+15.236285563" watchObservedRunningTime="2023-12-12 22:21:52.530401352 +0000 UTC m=+15.236463525"
	Dec 12 22:21:52 multinode-764961 kubelet[1592]: I1212 22:21:52.539041    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5fp6n" podStartSLOduration=1.538983522 podCreationTimestamp="2023-12-12 22:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:21:52.538919984 +0000 UTC m=+15.244982160" watchObservedRunningTime="2023-12-12 22:21:52.538983522 +0000 UTC m=+15.245045694"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.770299    1592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.790602    1592 topology_manager.go:215] "Topology Admit Handler" podUID="3b49595a-49e0-4c15-b383-68af29aadc8f" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.791867    1592 topology_manager.go:215] "Topology Admit Handler" podUID="b130b370-8465-4b9a-973d-8ff1bb6df10a" podNamespace="kube-system" podName="coredns-5dd5756b68-b6lvq"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.916878    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b49595a-49e0-4c15-b383-68af29aadc8f-tmp\") pod \"storage-provisioner\" (UID: \"3b49595a-49e0-4c15-b383-68af29aadc8f\") " pod="kube-system/storage-provisioner"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.916928    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b130b370-8465-4b9a-973d-8ff1bb6df10a-config-volume\") pod \"coredns-5dd5756b68-b6lvq\" (UID: \"b130b370-8465-4b9a-973d-8ff1bb6df10a\") " pod="kube-system/coredns-5dd5756b68-b6lvq"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.916953    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fwj\" (UniqueName: \"kubernetes.io/projected/b130b370-8465-4b9a-973d-8ff1bb6df10a-kube-api-access-k8fwj\") pod \"coredns-5dd5756b68-b6lvq\" (UID: \"b130b370-8465-4b9a-973d-8ff1bb6df10a\") " pod="kube-system/coredns-5dd5756b68-b6lvq"
	Dec 12 22:22:22 multinode-764961 kubelet[1592]: I1212 22:22:22.917053    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtfq\" (UniqueName: \"kubernetes.io/projected/3b49595a-49e0-4c15-b383-68af29aadc8f-kube-api-access-xwtfq\") pod \"storage-provisioner\" (UID: \"3b49595a-49e0-4c15-b383-68af29aadc8f\") " pod="kube-system/storage-provisioner"
	Dec 12 22:22:23 multinode-764961 kubelet[1592]: W1212 22:22:23.136311    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio-a62321ae5dfcea2e30ef4da8aad4f1c1155a232479bd6359433024ab332ad30f WatchSource:0}: Error finding container a62321ae5dfcea2e30ef4da8aad4f1c1155a232479bd6359433024ab332ad30f: Status 404 returned error can't find the container with id a62321ae5dfcea2e30ef4da8aad4f1c1155a232479bd6359433024ab332ad30f
	Dec 12 22:22:23 multinode-764961 kubelet[1592]: W1212 22:22:23.136530    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio-f45a49cd0fc48166d0c605dcf7670870a94697dfd0f1c1e34419dd42a0e63ad5 WatchSource:0}: Error finding container f45a49cd0fc48166d0c605dcf7670870a94697dfd0f1c1e34419dd42a0e63ad5: Status 404 returned error can't find the container with id f45a49cd0fc48166d0c605dcf7670870a94697dfd0f1c1e34419dd42a0e63ad5
	Dec 12 22:22:23 multinode-764961 kubelet[1592]: I1212 22:22:23.584204    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-b6lvq" podStartSLOduration=32.584156177 podCreationTimestamp="2023-12-12 22:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:22:23.58392834 +0000 UTC m=+46.289990514" watchObservedRunningTime="2023-12-12 22:22:23.584156177 +0000 UTC m=+46.290218351"
	Dec 12 22:22:23 multinode-764961 kubelet[1592]: I1212 22:22:23.598583    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.598253113 podCreationTimestamp="2023-12-12 22:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 22:22:23.598134813 +0000 UTC m=+46.304196973" watchObservedRunningTime="2023-12-12 22:22:23.598253113 +0000 UTC m=+46.304315288"
	Dec 12 22:23:14 multinode-764961 kubelet[1592]: I1212 22:23:14.301149    1592 topology_manager.go:215] "Topology Admit Handler" podUID="2f3d30a6-ac6a-474d-918d-dab3da829a79" podNamespace="default" podName="busybox-5bc68d56bd-bbwmj"
	Dec 12 22:23:14 multinode-764961 kubelet[1592]: I1212 22:23:14.484945    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r76qq\" (UniqueName: \"kubernetes.io/projected/2f3d30a6-ac6a-474d-918d-dab3da829a79-kube-api-access-r76qq\") pod \"busybox-5bc68d56bd-bbwmj\" (UID: \"2f3d30a6-ac6a-474d-918d-dab3da829a79\") " pod="default/busybox-5bc68d56bd-bbwmj"
	Dec 12 22:23:14 multinode-764961 kubelet[1592]: W1212 22:23:14.648505    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio-648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57 WatchSource:0}: Error finding container 648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57: Status 404 returned error can't find the container with id 648e9c0281c20cfc345256b3e9f3f5d9c26516e3211e3cf4def7aa81000aaa57
	Dec 12 22:23:15 multinode-764961 kubelet[1592]: I1212 22:23:15.684670    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-bbwmj" podStartSLOduration=1.079518448 podCreationTimestamp="2023-12-12 22:23:14 +0000 UTC" firstStartedPulling="2023-12-12 22:23:14.652266547 +0000 UTC m=+97.358328705" lastFinishedPulling="2023-12-12 22:23:15.257380384 +0000 UTC m=+97.963442538" observedRunningTime="2023-12-12 22:23:15.684394257 +0000 UTC m=+98.390456433" watchObservedRunningTime="2023-12-12 22:23:15.684632281 +0000 UTC m=+98.390694453"
	
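The "Failed to process watch event ... 404" warnings come from the kubelet's container watcher racing freshly created container IDs and are usually harmless; the surrounding pod_startup_latency_tracker lines show each pod reaching Running. For a live view of the same stream:

	out/minikube-linux-amd64 -p multinode-764961 ssh "sudo journalctl -u kubelet --no-pager -n 50"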

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-764961 -n multinode-764961
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-764961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.30s)
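Given the martian-source drops in the dmesg section, a plausible manual repro is to exec into one of the busybox pods created above and ping the host. The pod name is copied from the controller-manager log; the gateway address 192.168.58.1 is an assumption inferred from the node IPs on the 192.168.58.0/24 network:

	kubectl --context multinode-764961 exec busybox-5bc68d56bd-bbwmj -- ping -c 1 -w 2 192.168.58.1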

                                                
                                    
TestRunningBinaryUpgrade (75.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.536734314.exe start -p running-upgrade-424166 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.536734314.exe start -p running-upgrade-424166 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.880868781s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-424166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-424166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.306060939s)
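The upgrade attempt dies in about two seconds with exit status 90, immediately after "Updating the running docker ... container", so the failure appears to sit in re-provisioning the existing machine rather than in cluster bootstrap. The sequence can be replayed by hand with the same binaries and flags the test used:

	/tmp/minikube-v1.9.0.536734314.exe start -p running-upgrade-424166 --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-424166 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio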

                                                
                                                
-- stdout --
	* [running-upgrade-424166] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-424166 in cluster running-upgrade-424166
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Updating the running docker "running-upgrade-424166" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:34:47.086289  184531 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:34:47.087236  184531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:34:47.087258  184531 out.go:309] Setting ErrFile to fd 2...
	I1212 22:34:47.087269  184531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:34:47.087708  184531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:34:47.088440  184531 out.go:303] Setting JSON to false
	I1212 22:34:47.089662  184531 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4639,"bootTime":1702415848,"procs":531,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:34:47.089730  184531 start.go:138] virtualization: kvm guest
	I1212 22:34:47.092398  184531 out.go:177] * [running-upgrade-424166] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:34:47.095623  184531 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:34:47.094343  184531 notify.go:220] Checking for updates...
	I1212 22:34:47.098666  184531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:34:47.101548  184531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:34:47.104199  184531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:34:47.106825  184531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:34:47.110139  184531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:34:47.112008  184531 config.go:182] Loaded profile config "running-upgrade-424166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1212 22:34:47.112041  184531 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 22:34:47.114320  184531 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 22:34:47.115794  184531 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:34:47.156139  184531 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:34:47.156309  184531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:34:47.230008  184531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:true NGoroutines:77 SystemTime:2023-12-12 22:34:47.218403943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:34:47.230146  184531 docker.go:295] overlay module found
	I1212 22:34:47.232241  184531 out.go:177] * Using the docker driver based on existing profile
	I1212 22:34:47.233607  184531 start.go:298] selected driver: docker
	I1212 22:34:47.233630  184531 start.go:902] validating driver "docker" against &{Name:running-upgrade-424166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-424166 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 22:34:47.233758  184531 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:34:47.234934  184531 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:34:47.311598  184531 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:78 SystemTime:2023-12-12 22:34:47.302405489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:34:47.312024  184531 cni.go:84] Creating CNI manager for ""
	I1212 22:34:47.312054  184531 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 22:34:47.312070  184531 start_flags.go:323] config:
	{Name:running-upgrade-424166 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-424166 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 22:34:47.315645  184531 out.go:177] * Starting control plane node running-upgrade-424166 in cluster running-upgrade-424166
	I1212 22:34:47.317024  184531 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:34:47.318636  184531 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:34:47.319889  184531 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1212 22:34:47.319922  184531 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:34:47.338166  184531 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 22:34:47.338201  184531 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	W1212 22:34:47.352558  184531 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1212 22:34:47.352776  184531 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/running-upgrade-424166/config.json ...
	I1212 22:34:47.352777  184531 cache.go:107] acquiring lock: {Name:mkc7a5361770c7eec24a43c81e5b3a67f4dbf919 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.352876  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 22:34:47.352889  184531 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 124.785µs
	I1212 22:34:47.352903  184531 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 22:34:47.352919  184531 cache.go:107] acquiring lock: {Name:mkae0a16501e5d1155ea9c13eef36073453c6edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.352960  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1212 22:34:47.352975  184531 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 54.41µs
	I1212 22:34:47.352986  184531 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1212 22:34:47.352999  184531 cache.go:107] acquiring lock: {Name:mkb2d499d7baa1a7890fcf30c0249f78c616085a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353037  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1212 22:34:47.353047  184531 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 50.285µs
	I1212 22:34:47.353056  184531 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1212 22:34:47.353054  184531 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:34:47.353084  184531 cache.go:107] acquiring lock: {Name:mkb458373523ace1a154357cd802b5498de68477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353120  184531 cache.go:107] acquiring lock: {Name:mk5efc1c8e09180eb113c432905ea5ccf2e50024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353151  184531 cache.go:107] acquiring lock: {Name:mkff42b03f07b9a18b752457f1f96f95ebd0c7f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353201  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 22:34:47.353182  184531 cache.go:107] acquiring lock: {Name:mk981b7caf8ee1fba7ca76f16e274710a59aff06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353216  184531 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 66.08µs
	I1212 22:34:47.353232  184531 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 22:34:47.353239  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1212 22:34:47.353247  184531 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 64.131µs
	I1212 22:34:47.353260  184531 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1212 22:34:47.353138  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1212 22:34:47.353273  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1212 22:34:47.353287  184531 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 170.22µs
	I1212 22:34:47.353300  184531 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1212 22:34:47.353250  184531 cache.go:107] acquiring lock: {Name:mk0dc4745f3b1f54d8f24c5150d46ba5b4dce402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353332  184531 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1212 22:34:47.353342  184531 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 94.687µs
	I1212 22:34:47.353351  184531 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1212 22:34:47.353273  184531 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 190.846µs
	I1212 22:34:47.353373  184531 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1212 22:34:47.353407  184531 cache.go:87] Successfully saved all images to host disk.
	I1212 22:34:47.353087  184531 start.go:365] acquiring machines lock for running-upgrade-424166: {Name:mk05c2f733782927cdc60d3bc5317a64f57600bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:34:47.353497  184531 start.go:369] acquired machines lock for "running-upgrade-424166" in 77.66µs
	I1212 22:34:47.353527  184531 start.go:96] Skipping create...Using existing machine configuration
	I1212 22:34:47.353534  184531 fix.go:54] fixHost starting: m01
	I1212 22:34:47.353793  184531 cli_runner.go:164] Run: docker container inspect running-upgrade-424166 --format={{.State.Status}}
	I1212 22:34:47.380313  184531 fix.go:102] recreateIfNeeded on running-upgrade-424166: state=Running err=<nil>
	W1212 22:34:47.380345  184531 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 22:34:47.382094  184531 out.go:177] * Updating the running docker "running-upgrade-424166" container ...
	I1212 22:34:47.383771  184531 machine.go:88] provisioning docker machine ...
	I1212 22:34:47.383801  184531 ubuntu.go:169] provisioning hostname "running-upgrade-424166"
	I1212 22:34:47.383876  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:47.408726  184531 main.go:141] libmachine: Using SSH client type: native
	I1212 22:34:47.409257  184531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
	I1212 22:34:47.409278  184531 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-424166 && echo "running-upgrade-424166" | sudo tee /etc/hostname
	I1212 22:34:47.532659  184531 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-424166
	
	I1212 22:34:47.532723  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:47.556291  184531 main.go:141] libmachine: Using SSH client type: native
	I1212 22:34:47.556814  184531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
	I1212 22:34:47.556846  184531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-424166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-424166/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-424166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:34:47.668855  184531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
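
The hostname script above is deliberately idempotent: it leaves /etc/hosts alone when an entry for the new name already exists, rewrites the 127.0.1.1 line in place when there is one, and appends otherwise. The same guard works standalone (NAME is a placeholder):

    NAME=running-upgrade-424166   # placeholder; substitute the machine name
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
            sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
        else
            echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
        fi
    fi
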
	I1212 22:34:47.668880  184531 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:34:47.668899  184531 ubuntu.go:177] setting up certificates
	I1212 22:34:47.668925  184531 provision.go:83] configureAuth start
	I1212 22:34:47.668986  184531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-424166
	I1212 22:34:47.690839  184531 provision.go:138] copyHostCerts
	I1212 22:34:47.690895  184531 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem, removing ...
	I1212 22:34:47.690907  184531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:34:47.691479  184531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:34:47.691661  184531 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem, removing ...
	I1212 22:34:47.691673  184531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:34:47.691711  184531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:34:47.691781  184531 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem, removing ...
	I1212 22:34:47.691790  184531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:34:47.691821  184531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:34:47.691869  184531 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-424166 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-424166]
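
The copyHostCerts/generate step above refreshes the client certs and then mints a server certificate whose SAN list covers every name the machine may be reached by: the container IP 172.17.0.4, loopback, and the machine name. A self-signed stand-in with the same SAN shape, for illustration only (minikube's real server cert is signed by its own CA; -addext needs OpenSSL 1.1.1+):

    # illustrative self-signed cert carrying the same SAN entries as the log line above
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout server-key.pem -out server.pem \
        -subj "/O=jenkins.running-upgrade-424166" \
        -addext "subjectAltName=IP:172.17.0.4,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-424166"
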
	I1212 22:34:47.837424  184531 provision.go:172] copyRemoteCerts
	I1212 22:34:47.837495  184531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:34:47.837561  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:47.854961  184531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/running-upgrade-424166/id_rsa Username:docker}
	I1212 22:34:47.942916  184531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:34:47.961889  184531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 22:34:47.982882  184531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:34:48.000859  184531 provision.go:86] duration metric: configureAuth took 331.921179ms
	I1212 22:34:48.000883  184531 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:34:48.001059  184531 config.go:182] Loaded profile config "running-upgrade-424166": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1212 22:34:48.001161  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:48.025029  184531 main.go:141] libmachine: Using SSH client type: native
	I1212 22:34:48.025478  184531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32941 <nil> <nil>}
	I1212 22:34:48.025519  184531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:34:48.451007  184531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:34:48.451030  184531 machine.go:91] provisioned docker machine in 1.067241599s
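
The last provisioning action above injected extra CRI-O flags by writing an environment file and restarting the service; the --insecure-registry range is the cluster's service CIDR (10.96.0.0/12), so in-cluster registry services are reachable without TLS. As a standalone snippet (the crio systemd unit on the kicbase image is expected to source this file):

    # write the environment file CRI-O's unit sources, then restart the runtime
    sudo mkdir -p /etc/sysconfig
    echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
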
	I1212 22:34:48.451042  184531 start.go:300] post-start starting for "running-upgrade-424166" (driver="docker")
	I1212 22:34:48.451054  184531 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:34:48.451115  184531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:34:48.451166  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:48.470402  184531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/running-upgrade-424166/id_rsa Username:docker}
	I1212 22:34:48.551310  184531 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:34:48.554264  184531 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:34:48.554339  184531 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:34:48.554356  184531 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:34:48.554369  184531 info.go:137] Remote host: Ubuntu 19.10
	I1212 22:34:48.554384  184531 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:34:48.554445  184531 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:34:48.554539  184531 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> 163992.pem in /etc/ssl/certs
	I1212 22:34:48.554642  184531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:34:48.561552  184531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:34:48.579377  184531 start.go:303] post-start completed in 128.323347ms
	I1212 22:34:48.579443  184531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:34:48.579485  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:48.599509  184531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/running-upgrade-424166/id_rsa Username:docker}
	I1212 22:34:48.680142  184531 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:34:48.684200  184531 fix.go:56] fixHost completed within 1.330663263s
	I1212 22:34:48.684224  184531 start.go:83] releasing machines lock for "running-upgrade-424166", held for 1.330713332s
	I1212 22:34:48.684293  184531 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-424166
	I1212 22:34:48.702107  184531 ssh_runner.go:195] Run: cat /version.json
	I1212 22:34:48.702149  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:48.702255  184531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:34:48.702320  184531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-424166
	I1212 22:34:48.727842  184531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/running-upgrade-424166/id_rsa Username:docker}
	I1212 22:34:48.729920  184531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32941 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/running-upgrade-424166/id_rsa Username:docker}
	W1212 22:34:48.844006  184531 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 22:34:48.844101  184531 ssh_runner.go:195] Run: systemctl --version
	I1212 22:34:48.848362  184531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:34:48.906794  184531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:34:48.911306  184531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:34:48.926330  184531 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:34:48.926434  184531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:34:48.949112  184531 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
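
Note the .mk_disabled convention in the two find passes above: minikube sidelines competing CNI configs by renaming rather than deleting them, so the change is trivially reversible. The bridge/podman pass, reflowed for readability:

    # rename (not delete) the default bridge/podman CNI configs so the chosen CNI owns the node
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
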
	I1212 22:34:48.949132  184531 start.go:475] detecting cgroup driver to use...
	I1212 22:34:48.949159  184531 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:34:48.949194  184531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:34:48.970460  184531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:34:48.980303  184531 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:34:48.980357  184531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:34:48.990393  184531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:34:48.999237  184531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 22:34:49.009985  184531 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 22:34:49.010044  184531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:34:49.092626  184531 docker.go:219] disabling docker service ...
	I1212 22:34:49.092681  184531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:34:49.102305  184531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:34:49.112417  184531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:34:49.193224  184531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:34:49.274759  184531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
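
The stop/mask sequence above permanently parks dockerd (and cri-dockerd before it) so that CRI-O alone owns the node's container runtime. Masking is stronger than disabling: a masked unit is symlinked to /dev/null and cannot be started even by socket activation. Condensed:

    # stop, then mask: a masked unit cannot be restarted manually or via its socket
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is down"
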
	I1212 22:34:49.288087  184531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
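
crictl needs only a single key to find CRI-O; the write above is equivalent to:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl info   # optional sanity check once the runtime is up
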
	I1212 22:34:49.301865  184531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 22:34:49.301990  184531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:34:49.313382  184531 out.go:177] 
	W1212 22:34:49.314789  184531 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
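
This sed failure is the root cause of the test failure: the new binary assumes CRI-O's drop-in layout (/etc/crio/crio.conf.d/02-crio.conf), but the kicbase v0.0.8 container created by the v1.9.0 binary (Ubuntu 19.10; see the Config.Image field in the inspect output below) appears to predate that layout, so the in-place edit exits 2 and start aborts with RUNTIME_ENABLE. A defensive variant would fall back to the monolithic config (illustrative only, not what minikube runs):

    conf=/etc/crio/crio.conf.d/02-crio.conf
    [ -f "$conf" ] || conf=/etc/crio/crio.conf   # older images ship only the monolithic file
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
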
	W1212 22:34:49.314811  184531 out.go:239] * 
	W1212 22:34:49.316030  184531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 22:34:49.317451  184531 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-424166 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-12 22:34:49.340462092 +0000 UTC m=+1942.251379617
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-424166
helpers_test.go:235: (dbg) docker inspect running-upgrade-424166:

-- stdout --
	[
	    {
	        "Id": "6e1aa2bf14b2f08445433a85de91365ef9070badceb4cbd4d0e5455710fa531b",
	        "Created": "2023-12-12T22:33:37.617369209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-12T22:33:38.7406071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/6e1aa2bf14b2f08445433a85de91365ef9070badceb4cbd4d0e5455710fa531b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e1aa2bf14b2f08445433a85de91365ef9070badceb4cbd4d0e5455710fa531b/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e1aa2bf14b2f08445433a85de91365ef9070badceb4cbd4d0e5455710fa531b/hosts",
	        "LogPath": "/var/lib/docker/containers/6e1aa2bf14b2f08445433a85de91365ef9070badceb4cbd4d0e5455710fa531b/6e1aa2bf14b2f08445433a85de91365ef9070badceb4cbd4d0e5455710fa531b-json.log",
	        "Name": "/running-upgrade-424166",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-424166:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/29183b7765f456a3adf9d9b182777674e6f50efd1a8d23775bebdde9f973fa35-init/diff:/var/lib/docker/overlay2/44aa0f4708817c1812679a8a80198c3d281de8fd99c0f47a9363d121e412aa19/diff:/var/lib/docker/overlay2/fca8a6fdaf1f6cb00105d857fefae1563db119709165e213ddcf4beefd0fb4ab/diff:/var/lib/docker/overlay2/2f31d842eb0304ce8d8ec6b66c56f1b82315ce631348d3949dc4590c96c7951c/diff:/var/lib/docker/overlay2/b6ff6f2dc03de9ed0e23f06af6c22a62d25aca77e23704521fad1d0d5785d904/diff:/var/lib/docker/overlay2/823997e4ad04da6126deac4400093ba9bea4682a7de451ea7fdddb93c1cc12fd/diff:/var/lib/docker/overlay2/4a9cc9dd93e3c1cbd2925d6a827edc19c75cdfb524aba37d60896670f2daf3b8/diff:/var/lib/docker/overlay2/172dfe7e1a3392b8ef57e263205adc00f5b321459b0fc1e427c68ae970df4e7a/diff:/var/lib/docker/overlay2/6dc82f4473cdc64395fcb00838ea40266a021897dae4f63654a8c2ee505980f1/diff:/var/lib/docker/overlay2/0038df47c5c23c2fa12b9ba19452546060b7412179ab71a16d0ba587cada376b/diff:/var/lib/docker/overlay2/d673734624182f2b084dc6aa896213740fd2c22d5768f88b6ccaaa2d0af4e2e3/diff:/var/lib/docker/overlay2/d10dea6c4aad3cebdbb110855d8f71612e65c1a3be3dfad8a5b13e9f78a3288b/diff:/var/lib/docker/overlay2/f54ebb53d86a63abdf648d8b7a6b06dbc504e745aeecdecde9e8125f86a6ef3f/diff:/var/lib/docker/overlay2/a6fb64b5ceb2da8c23016c7eb424e8a780f6afdd68e0da01be0ff5496c66273d/diff:/var/lib/docker/overlay2/e807deaa20bea087abfc2dda45d3f6a8b92a2a440afed27fe531b4ac345b3430/diff:/var/lib/docker/overlay2/2646e4653969d5007ef266d230e9aa14a0c3aff781c1216a778ff16157fd912a/diff:/var/lib/docker/overlay2/07d787e4e4a4e92773ceee39decb8b2482cf95c271417f498713b703e998db7c/diff:/var/lib/docker/overlay2/f978446e81987fd993c522d363de9b5b8ad463a15d0e74017b4957c5c7fb5cdf/diff:/var/lib/docker/overlay2/31e4ff032a51ace79cb695390c3b178046b15c4f8ac7830e92df071d33966512/diff:/var/lib/docker/overlay2/45f6a5d3e265c5c94bfd25ed1323a87f3d9341fa1bcc56988ac43a7f306871d2/diff:/var/lib/docker/overlay2/2e96f55325716c39f711f32312eaf185ea5d6f7058780566cb6fba49bc2c7afd/diff:/var/lib/docker/overlay2/3e802cbda57a0074bb591d6158f0ae41ead7da265124fdb1260f737758ae27a0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29183b7765f456a3adf9d9b182777674e6f50efd1a8d23775bebdde9f973fa35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29183b7765f456a3adf9d9b182777674e6f50efd1a8d23775bebdde9f973fa35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29183b7765f456a3adf9d9b182777674e6f50efd1a8d23775bebdde9f973fa35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-424166",
	                "Source": "/var/lib/docker/volumes/running-upgrade-424166/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-424166",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-424166",
	                "name.minikube.sigs.k8s.io": "running-upgrade-424166",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b5336101690458b7cd39ccc0a2190c965db6740d45fc539d7978d2967c5d4f3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32941"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32939"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5b5336101690",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "580b1cab7bc91e97202e86a5ef7c839be03e5f1db0b06e38a2496bb955d57e66",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "b7528ee9a6f0f12cd1e26a91dbc69ad485262642abb8824a239cb4b6b1dc5e43",
	                    "EndpointID": "580b1cab7bc91e97202e86a5ef7c839be03e5f1db0b06e38a2496bb955d57e66",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
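
The repeated docker container inspect -f calls earlier in the log are Go-template queries over exactly this JSON; the SSH port lookup, for example, indexes NetworkSettings.Ports["22/tcp"][0].HostPort and resolves to 32941 here:

    # prints 32941 for this container, per the NetworkSettings.Ports block above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-424166
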
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-424166 -n running-upgrade-424166
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-424166 -n running-upgrade-424166: exit status 4 (380.853197ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 22:34:49.701620  185835 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-424166" does not appear in /home/jenkins/minikube-integration/17761-9643/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-424166" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
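
The exit-4 status is advisory rather than fatal ("may be ok"): the profile name never made it into /home/jenkins/minikube-integration/17761-9643/kubeconfig, so status.go cannot extract an endpoint for it. The warning's own suggestion is the fix:

    # re-point the kubectl context at the current cluster endpoint
    minikube update-context -p running-upgrade-424166
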
helpers_test.go:175: Cleaning up "running-upgrade-424166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-424166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-424166: (2.056408707s)
--- FAIL: TestRunningBinaryUpgrade (75.03s)

x
+
TestStoppedBinaryUpgrade/Upgrade (94.31s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.1759263115.exe start -p stopped-upgrade-323945 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.1759263115.exe start -p stopped-upgrade-323945 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m27.376661096s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.1759263115.exe -p stopped-upgrade-323945 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.1759263115.exe -p stopped-upgrade-323945 stop: (1.039693295s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-323945 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-323945 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.889298228s)

-- stdout --
	* [stopped-upgrade-323945] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-323945 in cluster stopped-upgrade-323945
	* Pulling base image v0.0.42-1702394725-17761 ...
	* Restarting existing docker container for "stopped-upgrade-323945" ...
	
	

-- /stdout --
** stderr ** 
	I1212 22:33:57.586783  172151 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:33:57.586894  172151 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:33:57.586906  172151 out.go:309] Setting ErrFile to fd 2...
	I1212 22:33:57.586912  172151 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:33:57.587132  172151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:33:57.587716  172151 out.go:303] Setting JSON to false
	I1212 22:33:57.588871  172151 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4590,"bootTime":1702415848,"procs":475,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:33:57.588935  172151 start.go:138] virtualization: kvm guest
	I1212 22:33:57.591531  172151 out.go:177] * [stopped-upgrade-323945] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:33:57.593141  172151 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:33:57.593210  172151 notify.go:220] Checking for updates...
	I1212 22:33:57.594630  172151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:33:57.596238  172151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:33:57.597693  172151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:33:57.599067  172151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:33:57.600464  172151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:33:57.602417  172151 config.go:182] Loaded profile config "stopped-upgrade-323945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1212 22:33:57.602460  172151 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517
	I1212 22:33:57.604667  172151 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1212 22:33:57.605954  172151 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:33:57.646859  172151 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:33:57.646983  172151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:33:57.700921  172151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:82 SystemTime:2023-12-12 22:33:57.692111882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:33:57.701064  172151 docker.go:295] overlay module found
	I1212 22:33:57.704023  172151 out.go:177] * Using the docker driver based on existing profile
	I1212 22:33:57.705434  172151 start.go:298] selected driver: docker
	I1212 22:33:57.705456  172151 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-323945 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-323945 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 22:33:57.705573  172151 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:33:57.706451  172151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:33:57.781537  172151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:82 SystemTime:2023-12-12 22:33:57.77237805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:33:57.781828  172151 cni.go:84] Creating CNI manager for ""
	I1212 22:33:57.781848  172151 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1212 22:33:57.781856  172151 start_flags.go:323] config:
	{Name:stopped-upgrade-323945 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-323945 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1212 22:33:57.817101  172151 out.go:177] * Starting control plane node stopped-upgrade-323945 in cluster stopped-upgrade-323945
	I1212 22:33:57.818589  172151 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:33:57.820212  172151 out.go:177] * Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:33:57.821652  172151 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1212 22:33:57.821763  172151 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:33:57.844721  172151 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon, skipping pull
	I1212 22:33:57.844751  172151 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in daemon, skipping load
	W1212 22:33:57.870343  172151 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
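
The 404 above is expected: no preloaded image tarball was ever published for Kubernetes v1.18.0 on cri-o, so minikube falls back to the per-image cache, which is what the burst of cache.go lines below is doing. The probe is reproducible by hand:

    # a 404 simply means "no preload for this k8s/runtime combination; use the image cache"
    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 | head -n1
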
	I1212 22:33:57.870562  172151 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/stopped-upgrade-323945/config.json ...
	I1212 22:33:57.870618  172151 cache.go:107] acquiring lock: {Name:mk0dc4745f3b1f54d8f24c5150d46ba5b4dce402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870665  172151 cache.go:107] acquiring lock: {Name:mkb2d499d7baa1a7890fcf30c0249f78c616085a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870686  172151 cache.go:107] acquiring lock: {Name:mkff42b03f07b9a18b752457f1f96f95ebd0c7f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870737  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1212 22:33:57.870750  172151 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 135.04µs
	I1212 22:33:57.870736  172151 cache.go:107] acquiring lock: {Name:mk981b7caf8ee1fba7ca76f16e274710a59aff06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870788  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1212 22:33:57.870774  172151 cache.go:107] acquiring lock: {Name:mk5efc1c8e09180eb113c432905ea5ccf2e50024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870774  172151 cache.go:107] acquiring lock: {Name:mkb458373523ace1a154357cd802b5498de68477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870802  172151 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 63.673µs
	I1212 22:33:57.870811  172151 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1212 22:33:57.870760  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1212 22:33:57.870821  172151 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 161.943µs
	I1212 22:33:57.870829  172151 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1212 22:33:57.870829  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1212 22:33:57.870839  172151 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 67.699µs
	I1212 22:33:57.870841  172151 cache.go:194] Successfully downloaded all kic artifacts
	I1212 22:33:57.870847  172151 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1212 22:33:57.870845  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1212 22:33:57.870860  172151 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 175.466µs
	I1212 22:33:57.870870  172151 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1212 22:33:57.870866  172151 start.go:365] acquiring machines lock for stopped-upgrade-323945: {Name:mka330eba56f264eb6e6d1d0f9b754b5a3a534b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870836  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1212 22:33:57.870761  172151 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1212 22:33:57.870614  172151 cache.go:107] acquiring lock: {Name:mkae0a16501e5d1155ea9c13eef36073453c6edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870958  172151 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 170.289µs
	I1212 22:33:57.871086  172151 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1212 22:33:57.870612  172151 cache.go:107] acquiring lock: {Name:mkc7a5361770c7eec24a43c81e5b3a67f4dbf919 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 22:33:57.870930  172151 start.go:369] acquired machines lock for "stopped-upgrade-323945" in 47.626µs
	I1212 22:33:57.871222  172151 start.go:96] Skipping create...Using existing machine configuration
	I1212 22:33:57.871225  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 22:33:57.871233  172151 fix.go:54] fixHost starting: m01
	I1212 22:33:57.871237  172151 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 640.773µs
	I1212 22:33:57.871247  172151 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 22:33:57.871093  172151 cache.go:115] /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1212 22:33:57.871292  172151 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 694.494µs
	I1212 22:33:57.871303  172151 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1212 22:33:57.871311  172151 cache.go:87] Successfully saved all images to host disk.
	I1212 22:33:57.871516  172151 cli_runner.go:164] Run: docker container inspect stopped-upgrade-323945 --format={{.State.Status}}
	I1212 22:33:57.891282  172151 fix.go:102] recreateIfNeeded on stopped-upgrade-323945: state=Stopped err=<nil>
	W1212 22:33:57.891316  172151 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 22:33:57.893504  172151 out.go:177] * Restarting existing docker container for "stopped-upgrade-323945" ...
	I1212 22:33:57.894914  172151 cli_runner.go:164] Run: docker start stopped-upgrade-323945
	I1212 22:33:58.211263  172151 cli_runner.go:164] Run: docker container inspect stopped-upgrade-323945 --format={{.State.Status}}
	I1212 22:33:58.232001  172151 kic.go:430] container "stopped-upgrade-323945" state is running.
	I1212 22:33:58.323403  172151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-323945
	I1212 22:33:58.343917  172151 profile.go:148] Saving config to /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/stopped-upgrade-323945/config.json ...
	I1212 22:33:58.445411  172151 machine.go:88] provisioning docker machine ...
	I1212 22:33:58.445447  172151 ubuntu.go:169] provisioning hostname "stopped-upgrade-323945"
	I1212 22:33:58.445498  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:33:58.466907  172151 main.go:141] libmachine: Using SSH client type: native
	I1212 22:33:58.467275  172151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I1212 22:33:58.467291  172151 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-323945 && echo "stopped-upgrade-323945" | sudo tee /etc/hostname
	I1212 22:33:58.468074  172151 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56822->127.0.0.1:32944: read: connection reset by peer
	I1212 22:34:01.606826  172151 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-323945
	
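
The handshake failure at 22:33:58 is benign: the container had just been started and its sshd was not yet listening, so libmachine retried until the command succeeded three seconds later. Waiting for the forwarded port by hand would look like this (32944 is the port from this log; requires a netcat with -z, e.g. the OpenBSD nc shipped on Ubuntu):

    # block until the container's forwarded sshd accepts TCP connections
    until nc -z 127.0.0.1 32944; do sleep 1; done
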
	I1212 22:34:01.606917  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:01.626884  172151 main.go:141] libmachine: Using SSH client type: native
	I1212 22:34:01.627285  172151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I1212 22:34:01.627315  172151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-323945' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-323945/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-323945' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 22:34:01.743531  172151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 22:34:01.743578  172151 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17761-9643/.minikube CaCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17761-9643/.minikube}
	I1212 22:34:01.743599  172151 ubuntu.go:177] setting up certificates
	I1212 22:34:01.743608  172151 provision.go:83] configureAuth start
	I1212 22:34:01.743661  172151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-323945
	I1212 22:34:01.763004  172151 provision.go:138] copyHostCerts
	I1212 22:34:01.763070  172151 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem, removing ...
	I1212 22:34:01.763087  172151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem
	I1212 22:34:01.763162  172151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/ca.pem (1082 bytes)
	I1212 22:34:01.763272  172151 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem, removing ...
	I1212 22:34:01.763281  172151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem
	I1212 22:34:01.763314  172151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/cert.pem (1123 bytes)
	I1212 22:34:01.763397  172151 exec_runner.go:144] found /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem, removing ...
	I1212 22:34:01.763407  172151 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem
	I1212 22:34:01.763439  172151 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17761-9643/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17761-9643/.minikube/key.pem (1675 bytes)
	I1212 22:34:01.763538  172151 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-323945 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-323945]
	I1212 22:34:01.848026  172151 provision.go:172] copyRemoteCerts
	I1212 22:34:01.848116  172151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 22:34:01.848166  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:01.864868  172151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/stopped-upgrade-323945/id_rsa Username:docker}
	I1212 22:34:01.954613  172151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 22:34:01.993425  172151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 22:34:02.012158  172151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 22:34:02.037944  172151 provision.go:86] duration metric: configureAuth took 294.323762ms
	I1212 22:34:02.037988  172151 ubuntu.go:193] setting minikube options for container-runtime
	I1212 22:34:02.038172  172151 config.go:182] Loaded profile config "stopped-upgrade-323945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1212 22:34:02.038278  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:02.059998  172151 main.go:141] libmachine: Using SSH client type: native
	I1212 22:34:02.060527  172151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 127.0.0.1 32944 <nil> <nil>}
	I1212 22:34:02.060550  172151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 22:34:02.620717  172151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 22:34:02.620763  172151 machine.go:91] provisioned docker machine in 4.175326311s
	I1212 22:34:02.620775  172151 start.go:300] post-start starting for "stopped-upgrade-323945" (driver="docker")
	I1212 22:34:02.620787  172151 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 22:34:02.620836  172151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 22:34:02.620869  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:02.638506  172151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/stopped-upgrade-323945/id_rsa Username:docker}
	I1212 22:34:02.718960  172151 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 22:34:02.721724  172151 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1212 22:34:02.721744  172151 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 22:34:02.721755  172151 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1212 22:34:02.721763  172151 info.go:137] Remote host: Ubuntu 19.10
	I1212 22:34:02.721783  172151 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/addons for local assets ...
	I1212 22:34:02.721853  172151 filesync.go:126] Scanning /home/jenkins/minikube-integration/17761-9643/.minikube/files for local assets ...
	I1212 22:34:02.721945  172151 filesync.go:149] local asset: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem -> 163992.pem in /etc/ssl/certs
	I1212 22:34:02.722053  172151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 22:34:02.728507  172151 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/ssl/certs/163992.pem --> /etc/ssl/certs/163992.pem (1708 bytes)
	I1212 22:34:02.744886  172151 start.go:303] post-start completed in 124.098317ms
	I1212 22:34:02.744959  172151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:34:02.745011  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:02.761912  172151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/stopped-upgrade-323945/id_rsa Username:docker}
	I1212 22:34:02.840194  172151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 22:34:02.844076  172151 fix.go:56] fixHost completed within 4.972839102s
	I1212 22:34:02.844101  172151 start.go:83] releasing machines lock for "stopped-upgrade-323945", held for 4.972891575s
	I1212 22:34:02.844179  172151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-323945
	I1212 22:34:02.860731  172151 ssh_runner.go:195] Run: cat /version.json
	I1212 22:34:02.860779  172151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 22:34:02.860784  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:02.860833  172151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-323945
	I1212 22:34:02.879821  172151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/stopped-upgrade-323945/id_rsa Username:docker}
	I1212 22:34:02.881344  172151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32944 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/stopped-upgrade-323945/id_rsa Username:docker}
	W1212 22:34:02.988760  172151 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1212 22:34:02.988835  172151 ssh_runner.go:195] Run: systemctl --version
	I1212 22:34:02.992531  172151 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 22:34:03.046142  172151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 22:34:03.050423  172151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:34:03.064794  172151 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1212 22:34:03.064881  172151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 22:34:03.085629  172151 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 22:34:03.085652  172151 start.go:475] detecting cgroup driver to use...
	I1212 22:34:03.085684  172151 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1212 22:34:03.085730  172151 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 22:34:03.104367  172151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 22:34:03.112647  172151 docker.go:203] disabling cri-docker service (if available) ...
	I1212 22:34:03.112701  172151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 22:34:03.121581  172151 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 22:34:03.129770  172151 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1212 22:34:03.138020  172151 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1212 22:34:03.138066  172151 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 22:34:03.203139  172151 docker.go:219] disabling docker service ...
	I1212 22:34:03.203210  172151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 22:34:03.212775  172151 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 22:34:03.221263  172151 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 22:34:03.283744  172151 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 22:34:03.349870  172151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 22:34:03.359384  172151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 22:34:03.372053  172151 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 22:34:03.372108  172151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 22:34:03.381816  172151 out.go:177] 
	W1212 22:34:03.383479  172151 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1212 22:34:03.383543  172151 out.go:239] * 
	* 
	W1212 22:34:03.384566  172151 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 22:34:03.386224  172151 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-323945 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (94.31s)
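Note on the failure mode: the v1.9.0 guest image predates the drop-in directory /etc/crio/crio.conf.d, so the unconditional sed above exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a guarded variant, assuming the legacy single-file config lives at /etc/crio/crio.conf on older images (both the fallback path and the skip behavior are illustrative, not minikube's actual fix):

	# Hypothetical guarded pause_image update: prefer the drop-in file,
	# fall back to the assumed legacy location, and skip cleanly if neither exists.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	if [ -f "$CONF" ]; then
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	else
		echo "no CRI-O config found; leaving pause_image unchanged" >&2
	fi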

                                                
                                    

Test pass (282/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.59
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 6.25
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 9.52
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.2
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
25 TestDownloadOnlyKic 1.27
26 TestBinaryMirror 0.72
27 TestOffline 87.4
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 130.47
34 TestAddons/parallel/Registry 15.45
36 TestAddons/parallel/InspektorGadget 10.63
37 TestAddons/parallel/MetricsServer 5.66
38 TestAddons/parallel/HelmTiller 9.23
40 TestAddons/parallel/CSI 43.32
42 TestAddons/parallel/CloudSpanner 5.55
43 TestAddons/parallel/LocalPath 55.88
44 TestAddons/parallel/NvidiaDevicePlugin 5.49
47 TestAddons/serial/GCPAuth/Namespaces 0.11
48 TestAddons/StoppedEnableDisable 12.16
49 TestCertOptions 30.51
50 TestCertExpiration 221.97
52 TestForceSystemdFlag 29.98
53 TestForceSystemdEnv 29.84
55 TestKVMDriverInstallOrUpdate 2.92
59 TestErrorSpam/setup 23.65
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.86
62 TestErrorSpam/pause 1.48
63 TestErrorSpam/unpause 1.48
64 TestErrorSpam/stop 1.39
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.9
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 35.3
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.85
76 TestFunctional/serial/CacheCmd/cache/add_local 1.14
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 32.75
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.31
87 TestFunctional/serial/LogsFileCmd 1.33
88 TestFunctional/serial/InvalidService 4.05
90 TestFunctional/parallel/ConfigCmd 0.45
91 TestFunctional/parallel/DashboardCmd 11.16
92 TestFunctional/parallel/DryRun 0.42
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 0.95
98 TestFunctional/parallel/ServiceCmdConnect 7.64
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 24.77
102 TestFunctional/parallel/SSHCmd 0.67
103 TestFunctional/parallel/CpCmd 2.21
104 TestFunctional/parallel/MySQL 19.19
105 TestFunctional/parallel/FileSync 0.3
106 TestFunctional/parallel/CertSync 1.82
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.16
115 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.43
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 0.88
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.48
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.5
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.56
127 TestFunctional/parallel/ImageCommands/ImageBuild 7.23
128 TestFunctional/parallel/ImageCommands/Setup 0.91
129 TestFunctional/parallel/ServiceCmd/List 0.52
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.79
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
133 TestFunctional/parallel/ServiceCmd/Format 0.46
134 TestFunctional/parallel/ServiceCmd/URL 0.42
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
142 TestFunctional/parallel/MountCmd/any-port 7.28
143 TestFunctional/parallel/ProfileCmd/profile_list 0.4
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.01
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.93
147 TestFunctional/parallel/MountCmd/specific-port 2.28
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.35
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.16
150 TestFunctional/parallel/ImageCommands/ImageRemove 2.17
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.23
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.08
156 TestFunctional/delete_addon-resizer_images 0.07
157 TestFunctional/delete_my-image_image 0.01
158 TestFunctional/delete_minikube_cached_images 0.01
162 TestIngressAddonLegacy/StartLegacyK8sCluster 67.61
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.78
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
169 TestJSONOutput/start/Command 48.91
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.63
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.58
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.76
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.22
194 TestKicCustomNetwork/create_custom_network 34.92
195 TestKicCustomNetwork/use_default_bridge_network 24.05
196 TestKicExistingNetwork 26.76
197 TestKicCustomSubnet 26.95
198 TestKicStaticIP 27.07
199 TestMainNoArgs 0.06
200 TestMinikubeProfile 53.02
203 TestMountStart/serial/StartWithMountFirst 8.27
204 TestMountStart/serial/VerifyMountFirst 0.25
205 TestMountStart/serial/StartWithMountSecond 7.89
206 TestMountStart/serial/VerifyMountSecond 0.25
207 TestMountStart/serial/DeleteFirst 1.6
208 TestMountStart/serial/VerifyMountPostDelete 0.24
209 TestMountStart/serial/Stop 1.19
210 TestMountStart/serial/RestartStopped 6.85
211 TestMountStart/serial/VerifyMountPostStop 0.25
214 TestMultiNode/serial/FreshStart2Nodes 117.83
215 TestMultiNode/serial/DeployApp2Nodes 4.05
217 TestMultiNode/serial/AddNode 45.79
218 TestMultiNode/serial/MultiNodeLabels 0.06
219 TestMultiNode/serial/ProfileList 0.27
220 TestMultiNode/serial/CopyFile 8.95
221 TestMultiNode/serial/StopNode 2.09
222 TestMultiNode/serial/StartAfterStop 10.59
223 TestMultiNode/serial/RestartKeepsNodes 111.8
224 TestMultiNode/serial/DeleteNode 4.63
225 TestMultiNode/serial/StopMultiNode 23.84
226 TestMultiNode/serial/RestartMultiNode 77.35
227 TestMultiNode/serial/ValidateNameConflict 22.62
232 TestPreload 123.8
234 TestScheduledStopUnix 96.8
237 TestInsufficientStorage 13.08
240 TestKubernetesUpgrade 349.3
241 TestMissingContainerUpgrade 168.21
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
244 TestStoppedBinaryUpgrade/Setup 0.48
245 TestNoKubernetes/serial/StartWithK8s 37.31
247 TestNoKubernetes/serial/StartWithStopK8s 10.11
248 TestNoKubernetes/serial/Start 6.48
249 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
250 TestNoKubernetes/serial/ProfileList 1.38
251 TestNoKubernetes/serial/Stop 1.24
252 TestNoKubernetes/serial/StartNoArgs 9.29
253 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.53
263 TestPause/serial/Start 79.74
271 TestNetworkPlugins/group/false 3.52
275 TestPause/serial/SecondStartNoReconfiguration 32.7
276 TestPause/serial/Pause 0.66
277 TestPause/serial/VerifyStatus 0.34
278 TestPause/serial/Unpause 0.69
279 TestPause/serial/PauseAgain 0.83
280 TestPause/serial/DeletePaused 4.21
281 TestPause/serial/VerifyDeletedResources 0.56
283 TestStartStop/group/old-k8s-version/serial/FirstStart 129.76
285 TestStartStop/group/no-preload/serial/FirstStart 67.58
286 TestStartStop/group/no-preload/serial/DeployApp 7.74
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.81
288 TestStartStop/group/no-preload/serial/Stop 11.9
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
290 TestStartStop/group/no-preload/serial/SecondStart 339.87
291 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
293 TestStartStop/group/old-k8s-version/serial/Stop 11.85
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
295 TestStartStop/group/old-k8s-version/serial/SecondStart 419.66
297 TestStartStop/group/embed-certs/serial/FirstStart 41.76
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.54
300 TestStartStop/group/embed-certs/serial/DeployApp 9.36
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
302 TestStartStop/group/embed-certs/serial/Stop 11.91
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/embed-certs/serial/SecondStart 344.05
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.89
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 339.07
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
313 TestStartStop/group/no-preload/serial/Pause 2.68
315 TestStartStop/group/newest-cni/serial/FirstStart 35.77
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
318 TestStartStop/group/newest-cni/serial/Stop 2.01
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/newest-cni/serial/SecondStart 26.04
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
324 TestStartStop/group/newest-cni/serial/Pause 2.46
325 TestNetworkPlugins/group/auto/Start 45
326 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
328 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
329 TestStartStop/group/old-k8s-version/serial/Pause 2.85
330 TestNetworkPlugins/group/kindnet/Start 69.46
331 TestNetworkPlugins/group/auto/KubeletFlags 0.31
332 TestNetworkPlugins/group/auto/NetCatPod 9.34
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
335 TestNetworkPlugins/group/auto/DNS 0.15
336 TestNetworkPlugins/group/auto/Localhost 0.13
337 TestNetworkPlugins/group/auto/HairPin 0.14
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
339 TestStartStop/group/embed-certs/serial/Pause 2.77
340 TestNetworkPlugins/group/calico/Start 67.78
341 TestNetworkPlugins/group/custom-flannel/Start 60.17
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.06
343 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
344 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
345 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.88
349 TestNetworkPlugins/group/kindnet/DNS 0.16
350 TestNetworkPlugins/group/kindnet/Localhost 0.14
351 TestNetworkPlugins/group/kindnet/HairPin 0.13
352 TestNetworkPlugins/group/calico/ControllerPod 5.03
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.51
354 TestNetworkPlugins/group/calico/KubeletFlags 0.55
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
356 TestNetworkPlugins/group/calico/NetCatPod 11.38
357 TestNetworkPlugins/group/enable-default-cni/Start 44.52
358 TestNetworkPlugins/group/custom-flannel/DNS 0.16
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
360 TestNetworkPlugins/group/calico/DNS 0.16
361 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
362 TestNetworkPlugins/group/calico/Localhost 0.13
363 TestNetworkPlugins/group/calico/HairPin 0.14
364 TestNetworkPlugins/group/flannel/Start 60.2
365 TestNetworkPlugins/group/bridge/Start 79.77
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
371 TestNetworkPlugins/group/flannel/ControllerPod 5.02
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
373 TestNetworkPlugins/group/flannel/NetCatPod 10.23
374 TestNetworkPlugins/group/flannel/DNS 0.14
375 TestNetworkPlugins/group/flannel/Localhost 0.11
376 TestNetworkPlugins/group/flannel/HairPin 0.12
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
378 TestNetworkPlugins/group/bridge/NetCatPod 9.26
379 TestNetworkPlugins/group/bridge/DNS 0.15
380 TestNetworkPlugins/group/bridge/Localhost 0.12
381 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.16.0/json-events (8.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-479271 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-479271 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.592040997s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-479271
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-479271: exit status 85 (73.077052ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-479271 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-479271        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:27.185910   16410 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:27.186063   16410 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:27.186071   16410 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:27.186076   16410 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:27.186244   16410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	W1212 22:02:27.186357   16410 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17761-9643/.minikube/config/config.json: open /home/jenkins/minikube-integration/17761-9643/.minikube/config/config.json: no such file or directory
	I1212 22:02:27.186893   16410 out.go:303] Setting JSON to true
	I1212 22:02:27.187731   16410 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2699,"bootTime":1702415848,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:27.187792   16410 start.go:138] virtualization: kvm guest
	I1212 22:02:27.190674   16410 out.go:97] [download-only-479271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:27.192324   16410 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:02:27.190792   16410 notify.go:220] Checking for updates...
	W1212 22:02:27.190827   16410 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 22:02:27.195248   16410 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:27.196647   16410 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:02:27.198060   16410 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:02:27.199545   16410 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:02:27.202453   16410 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:02:27.202650   16410 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:02:27.223635   16410 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:02:27.223751   16410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:27.576988   16410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 22:02:27.569163204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:27.577083   16410 docker.go:295] overlay module found
	I1212 22:02:27.578807   16410 out.go:97] Using the docker driver based on user configuration
	I1212 22:02:27.578829   16410 start.go:298] selected driver: docker
	I1212 22:02:27.578836   16410 start.go:902] validating driver "docker" against <nil>
	I1212 22:02:27.578922   16410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:27.629470   16410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-12 22:02:27.621901289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:27.629633   16410 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 22:02:27.630143   16410 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1212 22:02:27.630298   16410 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 22:02:27.632442   16410 out.go:169] Using Docker driver with root privileges
	I1212 22:02:27.633932   16410 cni.go:84] Creating CNI manager for ""
	I1212 22:02:27.633955   16410 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:02:27.633969   16410 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 22:02:27.633983   16410 start_flags.go:323] config:
	{Name:download-only-479271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-479271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:27.635587   16410 out.go:97] Starting control plane node download-only-479271 in cluster download-only-479271
	I1212 22:02:27.635609   16410 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:02:27.636988   16410 out.go:97] Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:02:27.637014   16410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:02:27.637113   16410 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:02:27.651818   16410 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 22:02:27.651953   16410 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 22:02:27.652037   16410 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 22:02:27.690331   16410 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:27.690353   16410 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:27.690504   16410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1212 22:02:27.692427   16410 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 22:02:27.692444   16410 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:27.727886   16410 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:31.485801   16410 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:31.485913   16410 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-479271"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
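The "Last Start" log above also documents the preload flow: the tarball URL carries its expected digest in a ?checksum=md5:... query parameter, and preload.go saves and verifies the download against it before caching. A rough shell equivalent of that download-and-verify step (the URL and digest are copied from the log; the comparison logic is an assumption about what the save/verify steps do, not minikube's code):

	# Download the v1.16.0 cri-o preload and check it against the advertised md5.
	URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	WANT=432b600409d778ea7a21214e83948570
	OUT=$HOME/.minikube/cache/preloaded-tarball/$(basename "$URL")
	mkdir -p "$(dirname "$OUT")"
	curl -fsSL "$URL" -o "$OUT"
	GOT=$(md5sum "$OUT" | awk '{print $1}')
	if [ "$GOT" = "$WANT" ]; then
		echo "preload checksum OK: $GOT"
	else
		echo "checksum mismatch: got $GOT want $WANT" >&2
		rm -f "$OUT"
	fi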

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (6.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-479271 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-479271 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.248350755s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-479271
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-479271: exit status 85 (70.526804ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-479271 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-479271        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-479271 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-479271        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:35.852633   16566 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:35.852918   16566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:35.852928   16566 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:35.852933   16566 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:35.853098   16566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	W1212 22:02:35.853215   16566 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17761-9643/.minikube/config/config.json: open /home/jenkins/minikube-integration/17761-9643/.minikube/config/config.json: no such file or directory
	I1212 22:02:35.853632   16566 out.go:303] Setting JSON to true
	I1212 22:02:35.854422   16566 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2708,"bootTime":1702415848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:35.854478   16566 start.go:138] virtualization: kvm guest
	I1212 22:02:35.856753   16566 out.go:97] [download-only-479271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:35.858421   16566 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:02:35.856936   16566 notify.go:220] Checking for updates...
	I1212 22:02:35.861515   16566 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:35.863104   16566 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:02:35.864586   16566 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:02:35.865988   16566 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:02:35.868602   16566 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:02:35.869023   16566 config.go:182] Loaded profile config "download-only-479271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1212 22:02:35.869077   16566 start.go:810] api.Load failed for download-only-479271: filestore "download-only-479271": Docker machine "download-only-479271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:35.869150   16566 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:02:35.869181   16566 start.go:810] api.Load failed for download-only-479271: filestore "download-only-479271": Docker machine "download-only-479271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:35.889397   16566 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:02:35.889461   16566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:35.940190   16566 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-12 22:02:35.93172536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:35.940288   16566 docker.go:295] overlay module found
	I1212 22:02:35.942145   16566 out.go:97] Using the docker driver based on existing profile
	I1212 22:02:35.942174   16566 start.go:298] selected driver: docker
	I1212 22:02:35.942181   16566 start.go:902] validating driver "docker" against &{Name:download-only-479271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-479271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:35.942457   16566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:35.989976   16566 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-12 22:02:35.981689375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:35.991268   16566 cni.go:84] Creating CNI manager for ""
	I1212 22:02:35.991293   16566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:02:35.991314   16566 start_flags.go:323] config:
	{Name:download-only-479271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-479271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:35.993461   16566 out.go:97] Starting control plane node download-only-479271 in cluster download-only-479271
	I1212 22:02:35.993478   16566 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:02:35.995056   16566 out.go:97] Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:02:35.995084   16566 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:35.995178   16566 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:02:36.009741   16566 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 22:02:36.009836   16566 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 22:02:36.009851   16566 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory, skipping pull
	I1212 22:02:36.009855   16566 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in cache, skipping pull
	I1212 22:02:36.009864   16566 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 as a tarball
	I1212 22:02:36.030208   16566 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:36.030227   16566 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:36.030343   16566 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1212 22:02:36.032177   16566 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 22:02:36.032192   16566 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:36.066578   16566 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:40.398211   16566 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:40.398297   16566 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-479271"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
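
The exit status 85 above is the expected outcome: a download-only profile never creates a control-plane node (the dump itself says: The control plane node "" does not exist.), so "minikube logs" has nothing to read and the test only measures how long the command takes. The preload fetch in the same dump carries its expected digest in the URL query string (checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7); a minimal hand-run sketch of the same verification, using only the URL and digest shown on the download.go:107 line, not anything from the test code:

# Sketch only -- not part of the test run. URL and digest are taken verbatim
# from the log above; md5sum -c expects "digest<two spaces>filename".
url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
curl -sLO "$url"
echo "b0bd7b3b222c094c365d9c9e10e48fc7  preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -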

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (9.52s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-479271 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-479271 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.517861835s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (9.52s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-479271
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-479271: exit status 85 (76.80188ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-479271 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-479271           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-479271 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-479271           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-479271 | jenkins | v1.32.0 | 12 Dec 23 22:02 UTC |          |
	|         | -p download-only-479271           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/12 22:02:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 22:02:42.181371   16717 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:02:42.181521   16717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:42.181531   16717 out.go:309] Setting ErrFile to fd 2...
	I1212 22:02:42.181536   16717 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:02:42.181733   16717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	W1212 22:02:42.181878   16717 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17761-9643/.minikube/config/config.json: open /home/jenkins/minikube-integration/17761-9643/.minikube/config/config.json: no such file or directory
	I1212 22:02:42.182372   16717 out.go:303] Setting JSON to true
	I1212 22:02:42.183226   16717 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2714,"bootTime":1702415848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:02:42.183290   16717 start.go:138] virtualization: kvm guest
	I1212 22:02:42.185353   16717 out.go:97] [download-only-479271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:02:42.187049   16717 out.go:169] MINIKUBE_LOCATION=17761
	I1212 22:02:42.185479   16717 notify.go:220] Checking for updates...
	I1212 22:02:42.190030   16717 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:02:42.191604   16717 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:02:42.192974   16717 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:02:42.194476   16717 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 22:02:42.197411   16717 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 22:02:42.197822   16717 config.go:182] Loaded profile config "download-only-479271": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1212 22:02:42.197881   16717 start.go:810] api.Load failed for download-only-479271: filestore "download-only-479271": Docker machine "download-only-479271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:42.197971   16717 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 22:02:42.198012   16717 start.go:810] api.Load failed for download-only-479271: filestore "download-only-479271": Docker machine "download-only-479271" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 22:02:42.220297   16717 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:02:42.220369   16717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:42.270236   16717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-12 22:02:42.262338115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:42.270329   16717 docker.go:295] overlay module found
	I1212 22:02:42.272180   16717 out.go:97] Using the docker driver based on existing profile
	I1212 22:02:42.272204   16717 start.go:298] selected driver: docker
	I1212 22:02:42.272209   16717 start.go:902] validating driver "docker" against &{Name:download-only-479271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-479271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:42.272337   16717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:02:42.319929   16717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-12-12 22:02:42.312231391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:02:42.320605   16717 cni.go:84] Creating CNI manager for ""
	I1212 22:02:42.320624   16717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 22:02:42.320640   16717 start_flags.go:323] config:
	{Name:download-only-479271 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-479271 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:02:42.322829   16717 out.go:97] Starting control plane node download-only-479271 in cluster download-only-479271
	I1212 22:02:42.322850   16717 cache.go:121] Beginning downloading kic base image for docker with crio
	I1212 22:02:42.324318   16717 out.go:97] Pulling base image v0.0.42-1702394725-17761 ...
	I1212 22:02:42.324349   16717 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:02:42.324454   16717 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local docker daemon
	I1212 22:02:42.338423   16717 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 to local cache
	I1212 22:02:42.338526   16717 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory
	I1212 22:02:42.338540   16717 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 in local cache directory, skipping pull
	I1212 22:02:42.338544   16717 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 exists in cache, skipping pull
	I1212 22:02:42.338558   16717 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 as a tarball
	I1212 22:02:42.366523   16717 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:42.366550   16717 cache.go:56] Caching tarball of preloaded images
	I1212 22:02:42.366666   16717 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1212 22:02:42.368719   16717 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 22:02:42.368736   16717 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:42.400169   16717 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:4677ed63f210d912abc47b8c2f7401f7 -> /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I1212 22:02:47.108972   16717 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I1212 22:02:47.109054   16717 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17761-9643/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-479271"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-479271
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.27s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-042208 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-042208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-042208
--- PASS: TestDownloadOnlyKic (1.27s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-328563 --alsologtostderr --binary-mirror http://127.0.0.1:36103 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-328563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-328563
--- PASS: TestBinaryMirror (0.72s)
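
TestBinaryMirror points minikube at a local HTTP server in place of the public download host. A rough hand-run sketch of standing such a mirror up; the release-style path layout (<mirror>/<version>/bin/linux/amd64/<binary>) is an assumption about what minikube requests, and the profile name and file copies are illustrative:

# Sketch only. Serve previously downloaded binaries on the port the test used.
mkdir -p mirror/v1.28.4/bin/linux/amd64
cp kubectl kubeadm kubelet mirror/v1.28.4/bin/linux/amd64/   # hypothetical local copies
(cd mirror && python3 -m http.server 36103) &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:36103 --driver=docker --container-runtime=crio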

                                                
                                    
TestOffline (87.4s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-180311 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-180311 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m24.939807744s)
helpers_test.go:175: Cleaning up "offline-crio-180311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-180311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-180311: (2.462591269s)
--- PASS: TestOffline (87.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-818905
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-818905: exit status 85 (61.050191ms)

-- stdout --
	* Profile "addons-818905" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-818905"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-818905
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-818905: exit status 85 (59.499976ms)

-- stdout --
	* Profile "addons-818905" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-818905"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (130.47s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-818905 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-818905 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m10.473376159s)
--- PASS: TestAddons/Setup (130.47s)

TestAddons/parallel/Registry (15.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 13.986047ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5g6k8" [fc7ebc27-babc-48dc-928d-1b1782ea01ea] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01165728s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9wc4f" [957915c2-6516-426f-b900-6143af5f0982] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012162037s
addons_test.go:339: (dbg) Run:  kubectl --context addons-818905 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-818905 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-818905 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.635199405s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.45s)
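
The registry addon runs a stock Docker registry behind a node-local proxy; the test probes it in-cluster with wget and then resolves the node IP. An equivalent check from the host, assuming the proxy listens on node port 5000 as the stray DEBUG GET line further down suggests:

# Sketch only. /v2/ is the standard Docker registry API root; an HTTP 200 with
# an empty JSON body means the registry is serving.
REGISTRY_IP="$(out/minikube-linux-amd64 -p addons-818905 ip)"
curl -s "http://${REGISTRY_IP}:5000/v2/"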

                                                
                                    
TestAddons/parallel/InspektorGadget (10.63s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
2023/12/12 22:05:19 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-69w46" [958b0e98-58dc-4927-b160-73f08cb24579] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010649506s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-818905
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-818905: (5.617429276s)
--- PASS: TestAddons/parallel/InspektorGadget (10.63s)

TestAddons/parallel/MetricsServer (5.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.114432ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xt6xh" [37df71e4-7ba7-496c-b885-921e393df60e] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013950817s
addons_test.go:414: (dbg) Run:  kubectl --context addons-818905 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

TestAddons/parallel/HelmTiller (9.23s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 10.517495ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-8vj4p" [1ba59c52-d351-4cfa-8c97-733b952603c2] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01196665s
addons_test.go:472: (dbg) Run:  kubectl --context addons-818905 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-818905 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.622792437s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.23s)

TestAddons/parallel/CSI (43.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 4.622154ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-818905 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-818905 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [16f10ff4-b19a-4d9f-aeaf-d22d03b44163] Pending
helpers_test.go:344: "task-pv-pod" [16f10ff4-b19a-4d9f-aeaf-d22d03b44163] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [16f10ff4-b19a-4d9f-aeaf-d22d03b44163] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.008217619s
addons_test.go:583: (dbg) Run:  kubectl --context addons-818905 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-818905 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-818905 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-818905 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-818905 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-818905 delete pod task-pv-pod: (1.138263098s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-818905 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-818905 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-818905 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fb6d851f-601e-4570-94d6-e1680e9b2b99] Pending
helpers_test.go:344: "task-pv-pod-restore" [fb6d851f-601e-4570-94d6-e1680e9b2b99] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fb6d851f-601e-4570-94d6-e1680e9b2b99] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009801679s
addons_test.go:625: (dbg) Run:  kubectl --context addons-818905 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-818905 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-818905 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-818905 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.506702601s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.32s)
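
The CSI flow above is: create a PVC, run a pod against it, snapshot the volume, then restore the snapshot into a new claim (hpvc-restore). The real manifests live in testdata/csi-hostpath-driver/ and are not reproduced in the log; a claim of roughly this shape would exercise the restore step (the storage class name and size are assumptions, not taken from the test data):

# Sketch only -- an illustrative stand-in for testdata/csi-hostpath-driver/pvc-restore.yaml.
kubectl --context addons-818905 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc    # assumed class installed by the addon
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                     # assumed size
  dataSource:
    name: new-snapshot-demo            # the snapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF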

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-q5ss5" [730dad40-0b30-4d20-9969-04a173a3a0c4] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007362188s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-818905
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/parallel/LocalPath (55.88s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-818905 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-818905 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [95d3bd85-1462-4153-a34c-5cc36c0a7b75] Pending
helpers_test.go:344: "test-local-path" [95d3bd85-1462-4153-a34c-5cc36c0a7b75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [95d3bd85-1462-4153-a34c-5cc36c0a7b75] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [95d3bd85-1462-4153-a34c-5cc36c0a7b75] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.037172483s
addons_test.go:890: (dbg) Run:  kubectl --context addons-818905 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 ssh "cat /opt/local-path-provisioner/pvc-63257cc8-df89-4d4d-9324-970810f80368_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-818905 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-818905 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-818905 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-818905 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.135072429s)
--- PASS: TestAddons/parallel/LocalPath (55.88s)
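
The local-path provisioner backs each claim with a host directory named pvc-<uid>_<namespace>_<claim> under /opt/local-path-provisioner, which is exactly what the ssh "cat .../file1" step above reads back. To inspect the backing store by hand:

# Sketch only. Lists whatever provisioned volumes currently exist on the node.
out/minikube-linux-amd64 -p addons-818905 ssh "ls -la /opt/local-path-provisioner/"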

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jc5wh" [061520bc-edd5-47af-9f5a-ba1bfb03e15e] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.020967491s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-818905
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-818905 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-818905 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
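
What this sub-test asserts is compact enough to sketch: after creating a fresh namespace, the gcp-auth addon is expected to have replicated its credentials secret into it, so a plain `get secret` is the whole check. A hedged Go equivalent of the two kubectl calls above; the names come from the log and the error handling is simplified.

package main

import (
	"log"
	"os/exec"
)

func main() {
	ctx := "addons-818905"
	// Create the namespace, as addons_test.go:649 does.
	if out, err := exec.Command("kubectl", "--context", ctx,
		"create", "ns", "new-namespace").CombinedOutput(); err != nil {
		log.Fatalf("create ns: %v\n%s", err, out)
	}
	// The addon is expected to copy its gcp-auth secret into the new
	// namespace; a non-zero exit here means it did not.
	if out, err := exec.Command("kubectl", "--context", ctx,
		"get", "secret", "gcp-auth", "-n", "new-namespace").CombinedOutput(); err != nil {
		log.Fatalf("secret not replicated: %v\n%s", err, out)
	}
	log.Println("gcp-auth secret present in new-namespace")
}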

                                                
                                    
TestAddons/StoppedEnableDisable (12.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-818905
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-818905: (11.884999829s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-818905
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-818905
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-818905
--- PASS: TestAddons/StoppedEnableDisable (12.16s)

TestCertOptions (30.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-164103 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-164103 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.968837042s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-164103 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-164103 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-164103 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-164103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-164103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-164103: (3.925331274s)
--- PASS: TestCertOptions (30.51s)

TestCertExpiration (221.97s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-777409 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-777409 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.633512799s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-777409 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-777409 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.974887252s)
helpers_test.go:175: Cleaning up "cert-expiration-777409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-777409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-777409: (2.360308188s)
--- PASS: TestCertExpiration (221.97s)

TestForceSystemdFlag (29.98s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-744348 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-744348 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.375938012s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-744348 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-744348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-744348
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-744348: (2.341873504s)
--- PASS: TestForceSystemdFlag (29.98s)

TestForceSystemdEnv (29.84s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-313893 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1212 22:34:56.031408   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:35:04.711009   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-313893 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.484430008s)
helpers_test.go:175: Cleaning up "force-systemd-env-313893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-313893
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-313893: (2.353194281s)
--- PASS: TestForceSystemdEnv (29.84s)

TestKVMDriverInstallOrUpdate (2.92s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.92s)

TestErrorSpam/setup (23.65s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-390864 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-390864 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-390864 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-390864 --driver=docker  --container-runtime=crio: (23.647105031s)
--- PASS: TestErrorSpam/setup (23.65s)

TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (1.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 stop: (1.190879232s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-390864 --log_dir /tmp/nospam-390864 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17761-9643/.minikube/files/etc/test/nested/copy/16399/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.9s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-355715 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-355715 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.901960688s)
--- PASS: TestFunctional/serial/StartWithProxy (69.90s)

TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-355715 --alsologtostderr -v=8
E1212 22:10:04.710864   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:04.716611   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:04.726916   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:04.747162   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:04.787426   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:04.867742   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:05.028190   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:05.348753   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:05.989759   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:07.270301   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:09.831250   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:14.952072   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:10:25.192251   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-355715 --alsologtostderr -v=8: (35.301315834s)
functional_test.go:659: soft start took 35.30201406s for "functional-355715" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.30s)

TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-355715 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 cache add registry.k8s.io/pause:3.3: (1.062395899s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-355715 /tmp/TestFunctionalserialCacheCmdcacheadd_local114750743/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cache add minikube-local-cache-test:functional-355715
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cache delete minikube-local-cache-test:functional-355715
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-355715
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (267.558347ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
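
The cache_reload sequence above is a remove/miss/reload/hit cycle: delete the image on the node, confirm crictl inspecti now fails, run cache reload, then confirm inspecti succeeds again. A rough Go sketch of that cycle, assuming a minikube binary on PATH (the suite invokes out/minikube-linux-amd64) and passing the remote command to minikube ssh as a single string.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	p := "functional-355715"
	img := "registry.k8s.io/pause:latest"
	run("minikube", "-p", p, "ssh", "sudo crictl rmi "+img)
	// While the image is gone, inspecti exits non-zero (the log above
	// shows exit status 1 with "no such image").
	if run("minikube", "-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("expected a cache miss after rmi")
	}
	run("minikube", "-p", p, "cache", "reload")
	if err := run("minikube", "-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}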

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 kubectl -- --context functional-355715 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-355715 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-355715 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 22:10:45.672801   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-355715 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.753355135s)
functional_test.go:757: restart took 32.753487838s for "functional-355715" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.75s)

TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-355715 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
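
The phase/status pairs logged above come from a single kubectl query over the control-plane pods. A small Go sketch of that walk, decoding only the fields involved; the struct below is a hypothetical minimal model, not the suite's own types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models only the fields the health walk reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-355715",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		comp := p.Metadata.Labels["component"]
		fmt.Printf("%s phase: %s\n", comp, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", comp, c.Status)
			}
		}
	}
}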

                                                
                                    
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 logs: (1.307596674s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 logs --file /tmp/TestFunctionalserialLogsFileCmd1517348525/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 logs --file /tmp/TestFunctionalserialLogsFileCmd1517348525/001/logs.txt: (1.329385246s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/serial/InvalidService (4.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-355715 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-355715
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-355715: exit status 115 (324.840132ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32141 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-355715 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
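
The useful detail in this negative test is the exit code: minikube service against a service whose pods never start fails with SVC_UNREACHABLE, surfaced in the log above as exit status 115. A short Go sketch of checking for that, assuming minikube on PATH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-355715").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// 115 in the run above (SVC_UNREACHABLE).
		fmt.Println("service command exit code:", exitErr.ExitCode())
		return
	}
	fmt.Println("expected a non-zero exit, got:", err)
}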

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 config get cpus: exit status 14 (90.324628ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 config get cpus: exit status 14 (77.612871ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
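
Per the log, config get on an unset key exits with status 14, so the round trip above detects a missing key by exit code rather than by output. A hedged Go sketch of that round trip, assuming minikube on PATH.

package main

import (
	"fmt"
	"os/exec"
)

// configGetMisses reports whether `config get` signals a missing key,
// which minikube does with exit status 14 in the run above.
func configGetMisses(profile, key string) bool {
	err := exec.Command("minikube", "-p", profile, "config", "get", key).Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode() == 14
	}
	return false
}

func main() {
	p := "functional-355715"
	exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
	fmt.Println("missing after unset:", configGetMisses(p, "cpus")) // expect true
	exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()
	fmt.Println("missing after set:", configGetMisses(p, "cpus")) // expect false
}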

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-355715 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-355715 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 51457: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.16s)

TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-355715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-355715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.159595ms)

-- stdout --
	* [functional-355715] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1212 22:11:37.673682   49270 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:11:37.673822   49270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:11:37.673832   49270 out.go:309] Setting ErrFile to fd 2...
	I1212 22:11:37.673839   49270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:11:37.674068   49270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:11:37.674612   49270 out.go:303] Setting JSON to false
	I1212 22:11:37.675715   49270 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3250,"bootTime":1702415848,"procs":410,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:11:37.675775   49270 start.go:138] virtualization: kvm guest
	I1212 22:11:37.678839   49270 out.go:177] * [functional-355715] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:11:37.680587   49270 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:11:37.682018   49270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:11:37.680661   49270 notify.go:220] Checking for updates...
	I1212 22:11:37.683682   49270 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:11:37.685249   49270 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:11:37.686696   49270 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:11:37.688164   49270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:11:37.689940   49270 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:11:37.690485   49270 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:11:37.712555   49270 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:11:37.712681   49270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:11:37.765155   49270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-12-12 22:11:37.756369997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:11:37.765268   49270 docker.go:295] overlay module found
	I1212 22:11:37.767338   49270 out.go:177] * Using the docker driver based on existing profile
	I1212 22:11:37.768800   49270 start.go:298] selected driver: docker
	I1212 22:11:37.768828   49270 start.go:902] validating driver "docker" against &{Name:functional-355715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-355715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:11:37.768975   49270 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:11:37.771467   49270 out.go:177] 
	W1212 22:11:37.772965   49270 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 22:11:37.774367   49270 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-355715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
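
The dry run asserts a validation path: requesting 250MB is below minikube's 1800MB floor, and per the stderr above the CLI reports RSRC_INSUFFICIENT_REQ_MEMORY with exit status 23. A minimal Go sketch of that assertion, assuming minikube on PATH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-355715",
		"--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	// Exit status 23 is what the run above produced for the too-small request.
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 23 {
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY exit status")
		return
	}
	fmt.Println("unexpected result:", err)
}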

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-355715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-355715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.427725ms)

-- stdout --
	* [functional-355715] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1212 22:11:38.276630   49669 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:11:38.276764   49669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:11:38.276774   49669 out.go:309] Setting ErrFile to fd 2...
	I1212 22:11:38.276778   49669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:11:38.277060   49669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:11:38.277567   49669 out.go:303] Setting JSON to false
	I1212 22:11:38.278608   49669 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3250,"bootTime":1702415848,"procs":410,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:11:38.278675   49669 start.go:138] virtualization: kvm guest
	I1212 22:11:38.281190   49669 out.go:177] * [functional-355715] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1212 22:11:38.282723   49669 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:11:38.282729   49669 notify.go:220] Checking for updates...
	I1212 22:11:38.284320   49669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:11:38.285695   49669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:11:38.287293   49669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:11:38.288817   49669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:11:38.290297   49669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:11:38.292226   49669 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:11:38.292959   49669 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:11:38.318539   49669 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:11:38.318646   49669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:11:38.372322   49669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-12-12 22:11:38.362463742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:11:38.372432   49669 docker.go:295] overlay module found
	I1212 22:11:38.374481   49669 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1212 22:11:38.376123   49669 start.go:298] selected driver: docker
	I1212 22:11:38.376138   49669 start.go:902] validating driver "docker" against &{Name:functional-355715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702394725-17761@sha256:d54bd539f5344c99d05c8897079f380df5ad03e2c1323e08060f77c1f95b4517 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-355715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 22:11:38.376227   49669 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:11:38.378465   49669 out.go:177] 
	W1212 22:11:38.379910   49669 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 22:11:38.381386   49669 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

TestFunctional/parallel/ServiceCmdConnect (7.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-355715 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-355715 expose deployment hello-node-connect --type=NodePort --port=8080
E1212 22:11:26.634028   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-w7qnz" [23e2b758-b4c8-4b7b-96ed-c369ec28ebe7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-w7qnz" [23e2b758-b4c8-4b7b-96ed-c369ec28ebe7] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.009427057s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31881
functional_test.go:1674: http://192.168.49.2:31881: success! body:

Hostname: hello-node-connect-55497b8b78-w7qnz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31881
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.64s)
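The sequence above can be replayed by hand against the same profile. A minimal sketch, assuming the functional-355715 profile is still running (deployment name, image, and commands taken from the log):
	# create and expose the deployment, then curl the NodePort URL minikube reports
	kubectl --context functional-355715 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-355715 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-355715 wait --for=condition=ready pod -l app=hello-node-connect --timeout=600s
	curl -s "$(out/minikube-linux-amd64 -p functional-355715 service hello-node-connect --url)"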

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (24.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3667cbde-b62d-4540-ab14-f8ddba75ec08] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011399617s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-355715 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-355715 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-355715 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-355715 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92bf3924-6881-464c-b006-35a618917df0] Pending
helpers_test.go:344: "sp-pod" [92bf3924-6881-464c-b006-35a618917df0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92bf3924-6881-464c-b006-35a618917df0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.021755351s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-355715 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-355715 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-355715 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [94e95aae-f796-4152-9b64-edb7ec17ec25] Pending
helpers_test.go:344: "sp-pod" [94e95aae-f796-4152-9b64-edb7ec17ec25] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [94e95aae-f796-4152-9b64-edb7ec17ec25] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.014795271s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-355715 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.77s)
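What makes this pass is that /tmp/mount is backed by the PVC, so a file written before the pod is deleted is still present in the replacement pod. A sketch of the same round-trip, assuming (as the log implies) that the testdata manifests create the myclaim PVC and an sp-pod that mounts it at /tmp/mount:
	kubectl --context functional-355715 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-355715 wait --for=condition=ready pod/sp-pod
	kubectl --context functional-355715 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod; the claim, and therefore the file, should survive
	kubectl --context functional-355715 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-355715 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-355715 wait --for=condition=ready pod/sp-pod
	kubectl --context functional-355715 exec sp-pod -- ls /tmp/mount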

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh -n functional-355715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cp functional-355715:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3828617095/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh -n functional-355715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh -n functional-355715 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.21s)
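The cp invocations above cover both copy directions; in general form (profile name from this run, destination paths illustrative):
	# host -> guest
	out/minikube-linux-amd64 -p functional-355715 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# guest -> host, using the <profile>:<path> source form
	out/minikube-linux-amd64 -p functional-355715 cp functional-355715:/home/docker/cp-test.txt /tmp/cp-test.txt
	# verify from inside the guest
	out/minikube-linux-amd64 -p functional-355715 ssh -n functional-355715 "sudo cat /home/docker/cp-test.txt"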

TestFunctional/parallel/MySQL (19.19s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-355715 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-cpcwf" [9ed9653c-6ef8-402e-b323-ab25f474b92d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-cpcwf" [9ed9653c-6ef8-402e-b323-ab25f474b92d] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.056048978s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-355715 exec mysql-859648c796-cpcwf -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-355715 exec mysql-859648c796-cpcwf -- mysql -ppassword -e "show databases;": exit status 1 (113.972106ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-355715 exec mysql-859648c796-cpcwf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.19s)
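The first exec fails with ERROR 2002 because the pod goes Running before mysqld has finished creating its socket; the immediate retry succeeds. A sketch that avoids the race by polling first, assuming mysqladmin is present in the image (true for the stock mysql image, but an assumption about testdata/mysql.yaml):
	# poll until mysqld answers, then run the real query
	until kubectl --context functional-355715 exec mysql-859648c796-cpcwf -- mysqladmin ping -ppassword --silent; do
	  sleep 2
	done
	kubectl --context functional-355715 exec mysql-859648c796-cpcwf -- mysql -ppassword -e "show databases;"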

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/16399/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /etc/test/nested/copy/16399/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.82s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/16399.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /etc/ssl/certs/16399.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/16399.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /usr/share/ca-certificates/16399.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/163992.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /etc/ssl/certs/163992.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/163992.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /usr/share/ca-certificates/163992.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
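The /etc/ssl/certs/51391683.0 and 3ec20f2e.0 names follow the OpenSSL hashed-directory convention: each synced certificate is also reachable under its subject hash, which is how TLS tooling looks certs up. A sketch of checking the hash from inside the guest, assuming openssl is available there:
	# the .0 filename should equal the cert's subject hash
	out/minikube-linux-amd64 -p functional-355715 ssh "sudo openssl x509 -hash -noout -in /etc/ssl/certs/16399.pem"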

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-355715 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
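The go-template above prints only the label keys of the first node; the same data is available via jsonpath if template quoting is awkward:
	kubectl --context functional-355715 get nodes -o jsonpath='{.items[0].metadata.labels}'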

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh "sudo systemctl is-active docker": exit status 1 (261.584139ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh "sudo systemctl is-active containerd": exit status 1 (254.341374ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
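The exit status 3 is the expected outcome here: systemctl is-active exits 0 only for active units and uses 3 for inactive ones, so on a crio profile the other runtimes must fail this check. A quick manual version (crio service name assumed from the runtime used in this run):
	out/minikube-linux-amd64 -p functional-355715 ssh "sudo systemctl is-active crio"    # expect: active, exit 0
	out/minikube-linux-amd64 -p functional-355715 ssh "sudo systemctl is-active docker"  # expect: inactive, exit 3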

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-355715 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-355715 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bxrhh" [6635b55a-8c6b-46c6-aa14-407add42e546] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bxrhh" [6635b55a-8c6b-46c6-aa14-407add42e546] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.017692615s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)
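The readiness poll above can also be expressed with kubectl's built-in wait, which is convenient when replaying this step by hand:
	kubectl --context functional-355715 rollout status deployment/hello-node --timeout=600s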

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 47046: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-355715 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6c5e0b92-432a-4a9b-a80c-07a3a7687269] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6c5e0b92-432a-4a9b-a80c-07a3a7687269] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.082729801s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.43s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.88s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-355715 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-355715
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-355715 image ls --format short --alsologtostderr:
I1212 22:11:54.846801   54216 out.go:296] Setting OutFile to fd 1 ...
I1212 22:11:54.847175   54216 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:54.847206   54216 out.go:309] Setting ErrFile to fd 2...
I1212 22:11:54.847227   54216 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:54.847527   54216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
I1212 22:11:54.848517   54216 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:54.848746   54216 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:54.849373   54216 cli_runner.go:164] Run: docker container inspect functional-355715 --format={{.State.Status}}
I1212 22:11:54.869741   54216 ssh_runner.go:195] Run: systemctl --version
I1212 22:11:54.869782   54216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355715
I1212 22:11:54.886557   54216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/functional-355715/id_rsa Username:docker}
I1212 22:11:55.121264   54216 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.48s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-355715 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-355715  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | alpine             | 01e5c69afaf63 | 44.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-355715 image ls --format table --alsologtostderr:
I1212 22:11:55.406009   54427 out.go:296] Setting OutFile to fd 1 ...
I1212 22:11:55.406249   54427 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:55.406257   54427 out.go:309] Setting ErrFile to fd 2...
I1212 22:11:55.406262   54427 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:55.406430   54427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
I1212 22:11:55.406971   54427 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:55.407063   54427 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:55.407448   54427 cli_runner.go:164] Run: docker container inspect functional-355715 --format={{.State.Status}}
I1212 22:11:55.431929   54427 ssh_runner.go:195] Run: systemctl --version
I1212 22:11:55.431988   54427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355715
I1212 22:11:55.452834   54427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/functional-355715/id_rsa Username:docker}
I1212 22:11:55.719832   54427 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.50s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-355715 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":["docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc","docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44421929"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-355715"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-355715 image ls --format json --alsologtostderr:
I1212 22:11:55.305546   54382 out.go:296] Setting OutFile to fd 1 ...
I1212 22:11:55.305698   54382 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:55.305708   54382 out.go:309] Setting ErrFile to fd 2...
I1212 22:11:55.305713   54382 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:55.305933   54382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
I1212 22:11:55.306517   54382 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:55.306613   54382 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:55.307057   54382 cli_runner.go:164] Run: docker container inspect functional-355715 --format={{.State.Status}}
I1212 22:11:55.329955   54382 ssh_runner.go:195] Run: systemctl --version
I1212 22:11:55.330006   54382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355715
I1212 22:11:55.355664   54382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/functional-355715/id_rsa Username:docker}
I1212 22:11:55.520729   54382 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)
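The JSON listing carries the same data as the table format, so it is the convenient one for scripting. A sketch that reduces it to tag and size, assuming jq is installed on the host:
	out/minikube-linux-amd64 -p functional-355715 image ls --format json \
	  | jq -r '.[] | .repoTags[]? as $t | "\($t) \(.size)"'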

TestFunctional/parallel/ImageCommands/ImageListYaml (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-355715 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-355715
size: "34114467"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests:
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
- docker.io/library/nginx@sha256:558b1480dc5c8f4373601a641c56b4fd24a77105d1246bd80b991f8b5c5dc0fc
repoTags:
- docker.io/library/nginx:alpine
size: "44421929"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-355715 image ls --format yaml --alsologtostderr:
I1212 22:11:54.879927   54218 out.go:296] Setting OutFile to fd 1 ...
I1212 22:11:54.880068   54218 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:54.880078   54218 out.go:309] Setting ErrFile to fd 2...
I1212 22:11:54.880085   54218 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:54.880672   54218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
I1212 22:11:54.882003   54218 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:54.882119   54218 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:54.882605   54218 cli_runner.go:164] Run: docker container inspect functional-355715 --format={{.State.Status}}
I1212 22:11:54.900237   54218 ssh_runner.go:195] Run: systemctl --version
I1212 22:11:54.900281   54218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355715
I1212 22:11:54.927664   54218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/functional-355715/id_rsa Username:docker}
I1212 22:11:55.122404   54218 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.56s)

TestFunctional/parallel/ImageCommands/ImageBuild (7.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh pgrep buildkitd: exit status 1 (458.298902ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image build -t localhost/my-image:functional-355715 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image build -t localhost/my-image:functional-355715 testdata/build --alsologtostderr: (6.542141641s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-355715 image build -t localhost/my-image:functional-355715 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8166b4c7e11
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-355715
--> 447e59d66b3
Successfully tagged localhost/my-image:functional-355715
447e59d66b3df147dd65a9bdb15d556c354e587c8ade3d4279282c3241c4137c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-355715 image build -t localhost/my-image:functional-355715 testdata/build --alsologtostderr:
I1212 22:11:55.292174   54373 out.go:296] Setting OutFile to fd 1 ...
I1212 22:11:55.292340   54373 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:55.292350   54373 out.go:309] Setting ErrFile to fd 2...
I1212 22:11:55.292355   54373 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 22:11:55.292546   54373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
I1212 22:11:55.293109   54373 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:55.293603   54373 config.go:182] Loaded profile config "functional-355715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1212 22:11:55.294009   54373 cli_runner.go:164] Run: docker container inspect functional-355715 --format={{.State.Status}}
I1212 22:11:55.310048   54373 ssh_runner.go:195] Run: systemctl --version
I1212 22:11:55.310098   54373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355715
I1212 22:11:55.337751   54373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/functional-355715/id_rsa Username:docker}
I1212 22:11:55.464007   54373 build_images.go:151] Building image from path: /tmp/build.2527572632.tar
I1212 22:11:55.464061   54373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 22:11:55.523465   54373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2527572632.tar
I1212 22:11:55.527172   54373 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2527572632.tar: stat -c "%s %y" /var/lib/minikube/build/build.2527572632.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2527572632.tar': No such file or directory
I1212 22:11:55.527207   54373 ssh_runner.go:362] scp /tmp/build.2527572632.tar --> /var/lib/minikube/build/build.2527572632.tar (3072 bytes)
I1212 22:11:55.625543   54373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2527572632
I1212 22:11:55.637887   54373 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2527572632 -xf /var/lib/minikube/build/build.2527572632.tar
I1212 22:11:55.716075   54373 crio.go:297] Building image: /var/lib/minikube/build/build.2527572632
I1212 22:11:55.716148   54373 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-355715 /var/lib/minikube/build/build.2527572632 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 22:12:01.750728   54373 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-355715 /var/lib/minikube/build/build.2527572632 --cgroup-manager=cgroupfs: (6.034555673s)
I1212 22:12:01.750788   54373 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2527572632
I1212 22:12:01.758749   54373 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2527572632.tar
I1212 22:12:01.766246   54373 build_images.go:207] Built localhost/my-image:functional-355715 from /tmp/build.2527572632.tar
I1212 22:12:01.766279   54373 build_images.go:123] succeeded building to: functional-355715
I1212 22:12:01.766283   54373 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.23s)
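From the STEP lines, the testdata/build context amounts to a three-line Dockerfile plus one file. A reconstruction (not the literal testdata contents) that reproduces an equivalent build:
	mkdir -p /tmp/build && cd /tmp/build
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	echo hello > content.txt
	out/minikube-linux-amd64 -p functional-355715 image build -t localhost/my-image:functional-355715 .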

TestFunctional/parallel/ImageCommands/Setup (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-355715
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.91s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 service list -o json
functional_test.go:1493: Took "551.846018ms" to run "out/minikube-linux-amd64 -p functional-355715 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr: (4.560305103s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.79s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31689
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31689
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-355715 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.147.195 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
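
The TunnelCmd serial chain above boils down to the following workflow; a minimal sketch, assuming the nginx-svc LoadBalancer service created earlier in the run is still present:

  # run the tunnel in the background so LoadBalancer services get an ingress IP
  out/minikube-linux-amd64 -p functional-355715 tunnel --alsologtostderr &
  # read back the assigned ingress IP (the WaitService/IngressIP check)
  kubectl --context functional-355715 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
  # stopping the tunnel process removes the routes again (DeleteTunnel)
  kill %1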

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.28s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdany-port2999333618/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702419098388613992" to /tmp/TestFunctionalparallelMountCmdany-port2999333618/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702419098388613992" to /tmp/TestFunctionalparallelMountCmdany-port2999333618/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702419098388613992" to /tmp/TestFunctionalparallelMountCmdany-port2999333618/001/test-1702419098388613992
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.891122ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 22:11 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 22:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 22:11 test-1702419098388613992
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh cat /mount-9p/test-1702419098388613992
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-355715 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5b578c18-ae2e-4362-9acd-eda56c15c154] Pending
helpers_test.go:344: "busybox-mount" [5b578c18-ae2e-4362-9acd-eda56c15c154] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5b578c18-ae2e-4362-9acd-eda56c15c154] Running
helpers_test.go:344: "busybox-mount" [5b578c18-ae2e-4362-9acd-eda56c15c154] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5b578c18-ae2e-4362-9acd-eda56c15c154] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.041365894s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-355715 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdany-port2999333618/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.28s)
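
The core of the any-port flow can be reproduced by hand; a rough sketch, where /tmp/mnt is a hypothetical host directory:

  # expose a host directory inside the node over 9p
  out/minikube-linux-amd64 mount -p functional-355715 /tmp/mnt:/mount-9p &
  # confirm the 9p mount is visible from inside the node
  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-355715 ssh -- ls -la /mount-9p
  # clean up: force-unmount in the guest and stop the mount process
  out/minikube-linux-amd64 -p functional-355715 ssh "sudo umount -f /mount-9p"
  kill %1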

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "335.839779ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "67.430458ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "298.914274ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "74.710712ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
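
The JSON output is convenient for scripting; a small sketch, assuming jq is installed and that the output keeps its usual valid/invalid layout:

  # list the names of all valid profiles
  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
  # --light skips the per-profile status probes, hence the ~75ms runtime above
  out/minikube-linux-amd64 profile list -o json --light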

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr: (2.790641476s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-355715
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr: (3.914252616s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)
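
The tag-and-load pattern above is the usual way to move a locally built or retagged Docker image into the cluster's CRI-O image store; in outline:

  # retag an existing image with the profile-specific name
  docker pull gcr.io/google-containers/addon-resizer:1.8.9
  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-355715
  # copy it from the local Docker daemon into the node's runtime, then verify
  out/minikube-linux-amd64 -p functional-355715 image load --daemon gcr.io/google-containers/addon-resizer:functional-355715
  out/minikube-linux-amd64 -p functional-355715 image ls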

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdspecific-port531763330/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.958708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdspecific-port531763330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh "sudo umount -f /mount-9p": exit status 1 (298.382643ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-355715 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdspecific-port531763330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image save gcr.io/google-containers/addon-resizer:functional-355715 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image save gcr.io/google-containers/addon-resizer:functional-355715 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.347482953s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2020553765/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2020553765/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2020553765/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T" /mount1: exit status 1 (478.646955ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-355715 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2020553765/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2020553765/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-355715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2020553765/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image rm gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image rm gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr: (1.916368364s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
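
All three UpdateContextCmd subtests run the same command against different kubeconfig states; it rewrites the kubeconfig entry for the profile so the server address matches the running cluster:

  # re-sync the kubeconfig entry after the cluster IP or port has changed
  out/minikube-linux-amd64 -p functional-355715 update-context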

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
2023/12/12 22:11:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.02053479s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-355715
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-355715 image save --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-355715 image save --daemon gcr.io/google-containers/addon-resizer:functional-355715 --alsologtostderr: (2.048479406s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-355715
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.08s)
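
ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon together cover the round trip between the cluster image store, a tarball, and the host Docker daemon; roughly, with a relative tar path standing in for the workspace path used above:

  # cluster -> tarball
  out/minikube-linux-amd64 -p functional-355715 image save gcr.io/google-containers/addon-resizer:functional-355715 ./addon-resizer-save.tar
  # tarball -> cluster
  out/minikube-linux-amd64 -p functional-355715 image load ./addon-resizer-save.tar
  # cluster -> host Docker daemon, then verify it arrived
  out/minikube-linux-amd64 -p functional-355715 image save --daemon gcr.io/google-containers/addon-resizer:functional-355715
  docker image inspect gcr.io/google-containers/addon-resizer:functional-355715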

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-355715
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-355715
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-355715
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (67.61s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-036387 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1212 22:12:48.555121   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-036387 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m7.611944645s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (67.61s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.78s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-036387 addons enable ingress --alsologtostderr -v=5: (10.775863189s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.78s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-036387 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                    
TestJSONOutput/start/Command (48.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-636490 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1212 22:16:34.542656   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:16:44.783314   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:17:05.263693   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-636490 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (48.904975832s)
--- PASS: TestJSONOutput/start/Command (48.91s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-636490 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-636490 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-636490 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-636490 --output=json --user=testUser: (5.763607766s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-947474 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-947474 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.142606ms)

-- stdout --
	{"specversion":"1.0","id":"9de617dc-8609-478a-9e79-6c4dd18b08d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-947474] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"96bf0d6e-2c81-4bfb-9acf-05b86233cf6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17761"}}
	{"specversion":"1.0","id":"35fb9e5e-9afb-4271-b2c3-a94b2b2e5488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d8976776-6a30-4aa9-9754-8fa8c9a51924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig"}}
	{"specversion":"1.0","id":"6d2a46f4-272b-4c27-97ea-5e519074cfa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube"}}
	{"specversion":"1.0","id":"d26dd4d2-88d1-4b5d-a45a-16f31b9ac8f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b8e2ae66-b2da-40f3-8473-d9916eb52b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a40027a-7229-43ca-abed-495c45e02c43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-947474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-947474
--- PASS: TestErrorJSONOutput (0.22s)
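
Each line of the --output=json stream is a self-contained CloudEvents object, so failures can be picked out mechanically; a sketch assuming jq is available:

  # surface only error events (exit code, message, advice) from a JSON-mode run
  out/minikube-linux-amd64 start -p json-output-error-947474 --output=json --driver=fail \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'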

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.92s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-991132 --network=
E1212 22:17:46.224517   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-991132 --network=: (33.25463902s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-991132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-991132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-991132: (1.653261084s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.92s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.05s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-300644 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-300644 --network=bridge: (22.179130387s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-300644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-300644
E1212 22:18:32.987492   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:32.992729   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:33.002960   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:33.023259   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:33.063571   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:33.143882   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:33.304269   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:33.624726   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-300644: (1.849854901s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.05s)

                                                
                                    
TestKicExistingNetwork (26.76s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-843134 --network=existing-network
E1212 22:18:34.265184   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:35.545841   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:38.106717   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:43.227251   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:18:53.468273   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-843134 --network=existing-network: (24.669450535s)
helpers_test.go:175: Cleaning up "existing-network-843134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-843134
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-843134: (1.966050629s)
--- PASS: TestKicExistingNetwork (26.76s)

                                                
                                    
TestKicCustomSubnet (26.95s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-063053 --subnet=192.168.60.0/24
E1212 22:19:08.144736   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:19:13.948825   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-063053 --subnet=192.168.60.0/24: (24.938327482s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-063053 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-063053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-063053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-063053: (1.993979384s)
--- PASS: TestKicCustomSubnet (26.95s)
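
The subnet assertion is just a docker network inspect over the profile's network; the same check by hand:

  out/minikube-linux-amd64 start -p custom-subnet-063053 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-063053 --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected output: 192.168.60.0/24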

                                                
                                    
TestKicStaticIP (27.07s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-597020 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-597020 --static-ip=192.168.200.200: (24.908462791s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-597020 ip
helpers_test.go:175: Cleaning up "static-ip-597020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-597020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-597020: (2.026533425s)
--- PASS: TestKicStaticIP (27.07s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (53.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-906769 --driver=docker  --container-runtime=crio
E1212 22:19:54.909297   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:20:04.710919   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-906769 --driver=docker  --container-runtime=crio: (24.007952728s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-909256 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-909256 --driver=docker  --container-runtime=crio: (23.966378983s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-906769
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-909256
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-909256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-909256
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-909256: (1.848368601s)
helpers_test.go:175: Cleaning up "first-906769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-906769
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-906769: (2.2019651s)
--- PASS: TestMinikubeProfile (53.02s)
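
Switching the active profile is what the two bare "profile" invocations above do; in short:

  # make first-906769 the default target for subsequent minikube commands
  out/minikube-linux-amd64 profile first-906769
  # confirm which profile is marked active
  out/minikube-linux-amd64 profile list -ojson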

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-962610 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-962610 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.272602334s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.27s)
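
The MountStart serial tests bring up a node whose only job is to host a 9p mount; the flags pin the mount's uid/gid, msize and port, and --no-kubernetes skips the control plane entirely:

  out/minikube-linux-amd64 start -p mount-start-1-962610 --memory=2048 --mount \
    --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
    --no-kubernetes --driver=docker --container-runtime=crio
  # the host mount is then visible in the guest (checked by VerifyMountFirst below)
  out/minikube-linux-amd64 -p mount-start-1-962610 ssh -- ls /minikube-host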

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-962610 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.89s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-980625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-980625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.889325365s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.89s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-980625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-962610 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-962610 --alsologtostderr -v=5: (1.600728453s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-980625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-980625
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-980625: (1.193886888s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-980625
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-980625: (5.845289604s)
--- PASS: TestMountStart/serial/RestartStopped (6.85s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-980625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (117.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-764961 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1212 22:21:16.829494   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:21:24.301985   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:21:51.984907   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-764961 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m57.391250568s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.83s)
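
Bringing up the two-node cluster and checking both nodes is the same pair of commands the test drives:

  out/minikube-linux-amd64 start -p multinode-764961 --wait=true --memory=2200 --nodes=2 \
    --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p multinode-764961 status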

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-764961 -- rollout status deployment/busybox: (2.369265572s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-bbwmj -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-bbwmj -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-bbwmj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.05s)
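
The deploy test schedules one busybox replica per node and then checks DNS from each pod; per pod, the check reduces to:

  # resolve an external name and the in-cluster service name from inside a pod
  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p multinode-764961 -- exec busybox-5bc68d56bd-67rxw -- nslookup kubernetes.default.svc.cluster.local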

                                                
                                    
TestMultiNode/serial/AddNode (45.79s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-764961 -v 3 --alsologtostderr
E1212 22:23:32.987730   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
E1212 22:24:00.670245   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-764961 -v 3 --alsologtostderr: (45.213265519s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.79s)
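
Equivalent manual steps for the node-add check, with the same binary substitution assumed:

    minikube node add -p multinode-764961 -v 3 --alsologtostderr   # joins a third node (multinode-764961-m03)
    minikube -p multinode-764961 status --alsologtostderr          # all three nodes should report Running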

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-764961 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp testdata/cp-test.txt multinode-764961:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2901931105/001/cp-test_multinode-764961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961:/home/docker/cp-test.txt multinode-764961-m02:/home/docker/cp-test_multinode-764961_multinode-764961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test_multinode-764961_multinode-764961-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961:/home/docker/cp-test.txt multinode-764961-m03:/home/docker/cp-test_multinode-764961_multinode-764961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m03 "sudo cat /home/docker/cp-test_multinode-764961_multinode-764961-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp testdata/cp-test.txt multinode-764961-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2901931105/001/cp-test_multinode-764961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961-m02:/home/docker/cp-test.txt multinode-764961:/home/docker/cp-test_multinode-764961-m02_multinode-764961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961 "sudo cat /home/docker/cp-test_multinode-764961-m02_multinode-764961.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961-m02:/home/docker/cp-test.txt multinode-764961-m03:/home/docker/cp-test_multinode-764961-m02_multinode-764961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m03 "sudo cat /home/docker/cp-test_multinode-764961-m02_multinode-764961-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp testdata/cp-test.txt multinode-764961-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2901931105/001/cp-test_multinode-764961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961-m03:/home/docker/cp-test.txt multinode-764961:/home/docker/cp-test_multinode-764961-m03_multinode-764961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961 "sudo cat /home/docker/cp-test_multinode-764961-m03_multinode-764961.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 cp multinode-764961-m03:/home/docker/cp-test.txt multinode-764961-m02:/home/docker/cp-test_multinode-764961-m03_multinode-764961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test_multinode-764961-m03_multinode-764961-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.95s)
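
The copy matrix above boils down to three directions of `minikube cp`, each verified with a `sudo cat` over `ssh -n`. A condensed sketch, one copy per direction (the `/tmp` destination is illustrative; the test uses a generated temp directory):

    # host -> node, node -> host, node -> node
    minikube -p multinode-764961 cp testdata/cp-test.txt multinode-764961:/home/docker/cp-test.txt
    minikube -p multinode-764961 cp multinode-764961:/home/docker/cp-test.txt /tmp/cp-test_multinode-764961.txt
    minikube -p multinode-764961 cp multinode-764961:/home/docker/cp-test.txt multinode-764961-m02:/home/docker/cp-test_multinode-764961_multinode-764961-m02.txt
    # verify the copy arrived intact on the target node
    minikube -p multinode-764961 ssh -n multinode-764961-m02 "sudo cat /home/docker/cp-test_multinode-764961_multinode-764961-m02.txt"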

                                                
                                    
TestMultiNode/serial/StopNode (2.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-764961 node stop m03: (1.198144316s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-764961 status: exit status 7 (444.326767ms)

                                                
                                                
-- stdout --
	multinode-764961
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-764961-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-764961-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr: exit status 7 (444.655799ms)

                                                
                                                
-- stdout --
	multinode-764961
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-764961-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-764961-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:24:18.176164  115039 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:24:18.176299  115039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:24:18.176308  115039 out.go:309] Setting ErrFile to fd 2...
	I1212 22:24:18.176313  115039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:24:18.176508  115039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:24:18.176672  115039 out.go:303] Setting JSON to false
	I1212 22:24:18.176701  115039 mustload.go:65] Loading cluster: multinode-764961
	I1212 22:24:18.176735  115039 notify.go:220] Checking for updates...
	I1212 22:24:18.177085  115039 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:24:18.177097  115039 status.go:255] checking status of multinode-764961 ...
	I1212 22:24:18.177514  115039 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:24:18.194069  115039 status.go:330] multinode-764961 host status = "Running" (err=<nil>)
	I1212 22:24:18.194090  115039 host.go:66] Checking if "multinode-764961" exists ...
	I1212 22:24:18.194331  115039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961
	I1212 22:24:18.211070  115039 host.go:66] Checking if "multinode-764961" exists ...
	I1212 22:24:18.211317  115039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:24:18.211371  115039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961
	I1212 22:24:18.227256  115039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961/id_rsa Username:docker}
	I1212 22:24:18.312435  115039 ssh_runner.go:195] Run: systemctl --version
	I1212 22:24:18.316027  115039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:24:18.325632  115039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:24:18.377951  115039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:56 SystemTime:2023-12-12 22:24:18.369163776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:24:18.378650  115039 kubeconfig.go:92] found "multinode-764961" server: "https://192.168.58.2:8443"
	I1212 22:24:18.378673  115039 api_server.go:166] Checking apiserver status ...
	I1212 22:24:18.378702  115039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 22:24:18.388434  115039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I1212 22:24:18.396300  115039 api_server.go:182] apiserver freezer: "13:freezer:/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio/crio-d7794751baba85bed959af44653060222b7cf4955144b17ebcfd123f0fd5a2bc"
	I1212 22:24:18.396358  115039 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c9ae967ebe8580b13a0719d6e4dcdd0964986c2bbf10a37e5e45167a60b5768b/crio/crio-d7794751baba85bed959af44653060222b7cf4955144b17ebcfd123f0fd5a2bc/freezer.state
	I1212 22:24:18.403712  115039 api_server.go:204] freezer state: "THAWED"
	I1212 22:24:18.403748  115039 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1212 22:24:18.407740  115039 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1212 22:24:18.407759  115039 status.go:421] multinode-764961 apiserver status = Running (err=<nil>)
	I1212 22:24:18.407767  115039 status.go:257] multinode-764961 status: &{Name:multinode-764961 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 22:24:18.407784  115039 status.go:255] checking status of multinode-764961-m02 ...
	I1212 22:24:18.407997  115039 cli_runner.go:164] Run: docker container inspect multinode-764961-m02 --format={{.State.Status}}
	I1212 22:24:18.423622  115039 status.go:330] multinode-764961-m02 host status = "Running" (err=<nil>)
	I1212 22:24:18.423639  115039 host.go:66] Checking if "multinode-764961-m02" exists ...
	I1212 22:24:18.423853  115039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-764961-m02
	I1212 22:24:18.439044  115039 host.go:66] Checking if "multinode-764961-m02" exists ...
	I1212 22:24:18.439305  115039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 22:24:18.439342  115039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-764961-m02
	I1212 22:24:18.455089  115039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17761-9643/.minikube/machines/multinode-764961-m02/id_rsa Username:docker}
	I1212 22:24:18.539985  115039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 22:24:18.549759  115039 status.go:257] multinode-764961-m02 status: &{Name:multinode-764961-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 22:24:18.549796  115039 status.go:255] checking status of multinode-764961-m03 ...
	I1212 22:24:18.550019  115039 cli_runner.go:164] Run: docker container inspect multinode-764961-m03 --format={{.State.Status}}
	I1212 22:24:18.565791  115039 status.go:330] multinode-764961-m03 host status = "Stopped" (err=<nil>)
	I1212 22:24:18.565812  115039 status.go:343] host is not running, skipping remaining checks
	I1212 22:24:18.565824  115039 status.go:257] multinode-764961-m03 status: &{Name:multinode-764961-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
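
Stopping a single worker is expected to drive `status` to exit code 7 while the control plane stays Running, exactly as the stdout above shows. By hand, assuming the installed `minikube` binary:

    minikube -p multinode-764961 node stop m03
    minikube -p multinode-764961 status --alsologtostderr
    echo $?   # 7: at least one node is Stopped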

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-764961 node start m03 --alsologtostderr: (9.939841157s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.59s)
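
And the inverse check, restarting the stopped worker until `status` is clean again:

    minikube -p multinode-764961 node start m03 --alsologtostderr
    minikube -p multinode-764961 status   # exit 0 once every node is Running again
    kubectl get nodes                     # the restarted worker should return to Ready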

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (111.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-764961
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-764961
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-764961: (24.726055374s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-764961 --wait=true -v=8 --alsologtostderr
E1212 22:25:04.710931   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-764961 --wait=true -v=8 --alsologtostderr: (1m26.961973834s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-764961
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.80s)
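
The restart check amounts to comparing `node list` output from before the stop and after the full restart. A manual sketch (the temp files are illustrative):

    minikube node list -p multinode-764961 > /tmp/nodes.before
    minikube stop -p multinode-764961
    minikube start -p multinode-764961 --wait=true -v=8 --alsologtostderr
    minikube node list -p multinode-764961 > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after   # no output: the node set survived the restart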

                                                
                                    
TestMultiNode/serial/DeleteNode (4.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 node delete m03
E1212 22:26:24.302853   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-764961 node delete m03: (4.067661801s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.63s)
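
Deleting a node should also clean up its Docker volume and drop it from the Kubernetes node list, which is what the `docker volume ls` and `kubectl get nodes` steps verify:

    minikube -p multinode-764961 node delete m03
    minikube -p multinode-764961 status --alsologtostderr   # only two nodes reported
    docker volume ls                                        # no volume left behind for the deleted node
    kubectl get nodes                                       # m03 is gone; the remaining nodes stay Ready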

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 stop
E1212 22:26:27.757145   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-764961 stop: (23.649010798s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-764961 status: exit status 7 (91.364907ms)

                                                
                                                
-- stdout --
	multinode-764961
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-764961-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr: exit status 7 (95.320453ms)

                                                
                                                
-- stdout --
	multinode-764961
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-764961-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:26:49.392271  125385 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:26:49.392531  125385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:26:49.392540  125385 out.go:309] Setting ErrFile to fd 2...
	I1212 22:26:49.392545  125385 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:26:49.392718  125385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:26:49.392873  125385 out.go:303] Setting JSON to false
	I1212 22:26:49.392903  125385 mustload.go:65] Loading cluster: multinode-764961
	I1212 22:26:49.392953  125385 notify.go:220] Checking for updates...
	I1212 22:26:49.393294  125385 config.go:182] Loaded profile config "multinode-764961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:26:49.393306  125385 status.go:255] checking status of multinode-764961 ...
	I1212 22:26:49.393712  125385 cli_runner.go:164] Run: docker container inspect multinode-764961 --format={{.State.Status}}
	I1212 22:26:49.414852  125385 status.go:330] multinode-764961 host status = "Stopped" (err=<nil>)
	I1212 22:26:49.414902  125385 status.go:343] host is not running, skipping remaining checks
	I1212 22:26:49.414917  125385 status.go:257] multinode-764961 status: &{Name:multinode-764961 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 22:26:49.414946  125385 status.go:255] checking status of multinode-764961-m02 ...
	I1212 22:26:49.415193  125385 cli_runner.go:164] Run: docker container inspect multinode-764961-m02 --format={{.State.Status}}
	I1212 22:26:49.430830  125385 status.go:330] multinode-764961-m02 host status = "Stopped" (err=<nil>)
	I1212 22:26:49.430850  125385 status.go:343] host is not running, skipping remaining checks
	I1212 22:26:49.430856  125385 status.go:257] multinode-764961-m02 status: &{Name:multinode-764961-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (77.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-764961 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-764961 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m16.7773129s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-764961 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-764961
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-764961-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-764961-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.673313ms)

                                                
                                                
-- stdout --
	* [multinode-764961-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-764961-m02' is duplicated with machine name 'multinode-764961-m02' in profile 'multinode-764961'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-764961-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-764961-m03 --driver=docker  --container-runtime=crio: (20.352802593s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-764961
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-764961: exit status 80 (263.716247ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-764961
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-764961-m03 already exists in multinode-764961-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-764961-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-764961-m03: (1.866035776s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.62s)
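
Both failure modes above are reproducible directly: a profile name that collides with an existing cluster's machine name is rejected up front (exit 14), and `node add` refuses when the generated node name is already owned by another profile (exit 80). A sketch with the installed `minikube` binary:

    # collides with the machine name multinode-764961-m02 inside profile multinode-764961
    minikube start -p multinode-764961-m02 --driver=docker --container-runtime=crio   # exit 14, MK_USAGE
    # a standalone profile that takes the name the next 'node add' would generate
    minikube start -p multinode-764961-m03 --driver=docker --container-runtime=crio
    minikube node add -p multinode-764961                                             # exit 80, GUEST_NODE_ADD
    minikube delete -p multinode-764961-m03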

                                                
                                    
TestPreload (123.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-924498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-924498 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m8.873041735s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-924498 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-924498
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-924498: (5.661306763s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-924498 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1212 22:30:04.710974   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-924498 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.042933549s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-924498 image list
helpers_test.go:175: Cleaning up "test-preload-924498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-924498
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-924498: (2.2515627s)
--- PASS: TestPreload (123.80s)
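
The preload scenario can be replayed in four steps: create the cluster without preloaded images, pull an extra image, stop, then restart with preloads enabled and confirm the pulled image survived. A sketch, again substituting the installed `minikube` binary:

    minikube start -p test-preload-924498 --memory=2200 --wait=true --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-924498 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-924498
    minikube start -p test-preload-924498 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    minikube -p test-preload-924498 image list   # gcr.io/k8s-minikube/busybox should still be listed
    minikube delete -p test-preload-924498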

                                                
                                    
TestScheduledStopUnix (96.8s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-539626 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-539626 --memory=2048 --driver=docker  --container-runtime=crio: (20.97560506s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539626 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-539626 -n scheduled-stop-539626
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539626 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539626 --cancel-scheduled
E1212 22:31:24.302587   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-539626 -n scheduled-stop-539626
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-539626
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539626 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-539626
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-539626: exit status 7 (76.29835ms)

                                                
                                                
-- stdout --
	scheduled-stop-539626
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-539626 -n scheduled-stop-539626
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-539626 -n scheduled-stop-539626: exit status 7 (72.665499ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-539626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-539626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-539626: (4.444783526s)
--- PASS: TestScheduledStopUnix (96.80s)
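
The scheduled-stop flow arms a timer, cancels it, then lets a short schedule actually fire; afterwards `status` exits 7. Condensed (the sleep is illustrative padding for the 15s timer):

    minikube start -p scheduled-stop-539626 --memory=2048 --driver=docker --container-runtime=crio
    minikube stop -p scheduled-stop-539626 --schedule 5m        # arm a stop five minutes out
    minikube stop -p scheduled-stop-539626 --cancel-scheduled   # disarm it
    minikube stop -p scheduled-stop-539626 --schedule 15s       # arm a short timer and let it fire
    sleep 20
    minikube status --format={{.Host}} -p scheduled-stop-539626 # prints Stopped, exit status 7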

                                                
                                    
TestInsufficientStorage (13.08s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-678271 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-678271 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.728854996s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9a8b34ca-b758-4cea-baaa-f0824aed1c6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-678271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8acc707-e690-4037-ac2a-f213f3cd513d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17761"}}
	{"specversion":"1.0","id":"063577f9-8a96-4082-a60c-9887173103cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5663ffc4-08ba-434d-ace2-a0e025e0f572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig"}}
	{"specversion":"1.0","id":"ed10bd04-60c3-4489-a0e9-10adb775ada7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube"}}
	{"specversion":"1.0","id":"6998b09a-cc1a-4f2e-9edf-9c4dfd48d2fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"370eac0f-a393-43eb-b684-4fd2b73b4523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0970d78-1b7a-4a9d-9c71-94f22eef0999","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0a126d11-257e-49f6-b017-5acccee6d025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c05b4ce0-e129-43df-b8a2-b29b16c50ccc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"af6600ea-92e8-44ed-9e5a-fbc6ac69eb2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0f37b80f-6ce8-4948-a7ec-955bf1d799b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-678271 in cluster insufficient-storage-678271","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3824399a-8596-4331-a533-7941812dbd96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702394725-17761 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"73ee22dd-978f-4126-a434-385f33601000","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"80fc5173-6a24-4935-ab6a-ec731762a908","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-678271 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-678271 --output=json --layout=cluster: exit status 7 (262.961893ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-678271","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-678271","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 22:32:26.497910  146871 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-678271" does not appear in /home/jenkins/minikube-integration/17761-9643/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-678271 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-678271 --output=json --layout=cluster: exit status 7 (266.072132ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-678271","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-678271","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 22:32:26.764652  146958 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-678271" does not appear in /home/jenkins/minikube-integration/17761-9643/kubeconfig
	E1212 22:32:26.773733  146958 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/insufficient-storage-678271/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-678271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-678271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-678271: (1.825439817s)
--- PASS: TestInsufficientStorage (13.08s)
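
The storage check relies on the two test-only environment variables visible in the JSON events above to fake a nearly full /var, so `start` bails out with exit 26 before creating the node:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage-678271 --memory=2048 --output=json \
      --wait=true --driver=docker --container-runtime=crio      # exit 26, RSRC_DOCKER_STORAGE
    minikube status -p insufficient-storage-678271 --output=json --layout=cluster
    # StatusCode 507 (InsufficientStorage) in the JSON; command exits 7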

                                                
                                    
TestKubernetesUpgrade (349.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.875939352s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-856114
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-856114: (2.04001231s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-856114 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-856114 status --format={{.Host}}: exit status 7 (95.771038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.984215861s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-856114 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (88.087255ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-856114] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-856114
	    minikube start -p kubernetes-upgrade-856114 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8561142 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-856114 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.967671625s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-856114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-856114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-856114: (2.19229628s)
--- PASS: TestKubernetesUpgrade (349.30s)
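
Manually, the upgrade path is start old, stop, start new; the in-place downgrade is refused with exit 106 and the recovery suggestions quoted above. A sketch with the installed `minikube` binary:

    minikube start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.16.0 \
      --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-856114
    minikube start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.29.0-rc.2 \
      --driver=docker --container-runtime=crio
    # downgrade attempt: exits 106 (K8S_DOWNGRADE_UNSUPPORTED) without touching the cluster
    minikube start -p kubernetes-upgrade-856114 --memory=2200 --kubernetes-version=v1.16.0 \
      --driver=docker --container-runtime=crio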

                                                
                                    
TestMissingContainerUpgrade (168.21s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.3260124933.exe start -p missing-upgrade-206144 --memory=2200 --driver=docker  --container-runtime=crio
E1212 22:32:47.345182   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.3260124933.exe start -p missing-upgrade-206144 --memory=2200 --driver=docker  --container-runtime=crio: (1m20.506196838s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-206144
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-206144: (2.082932993s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-206144
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-206144 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-206144 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m22.706005356s)
helpers_test.go:175: Cleaning up "missing-upgrade-206144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-206144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-206144: (2.374079698s)
--- PASS: TestMissingContainerUpgrade (168.21s)
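
Here an old binary (v1.9.0, downloaded to a per-run temp path) creates the cluster, its container is removed out from under it, and the current binary has to adopt and restart the profile. With `<old-minikube>` standing in for that downloaded binary:

    <old-minikube> start -p missing-upgrade-206144 --memory=2200 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-206144 && docker rm missing-upgrade-206144   # simulate the lost container
    minikube start -p missing-upgrade-206144 --memory=2200 --driver=docker --container-runtime=crio
    minikube delete -p missing-upgrade-206144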

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-203266 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-203266 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.624091ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-203266] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
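
This is a pure flag-validation check: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, so the command exits 14 before doing any work:

    minikube start -p NoKubernetes-203266 --no-kubernetes --kubernetes-version=1.20 \
      --driver=docker --container-runtime=crio   # exit 14, MK_USAGE
    minikube config unset kubernetes-version     # the suggested fix when the version comes from global config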

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-203266 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-203266 --driver=docker  --container-runtime=crio: (36.9600473s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-203266 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.31s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-203266 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-203266 --no-kubernetes --driver=docker  --container-runtime=crio: (7.639367186s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-203266 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-203266 status -o json: exit status 2 (361.729039ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-203266","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-203266
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-203266: (2.108698773s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.11s)

                                                
                                    
TestNoKubernetes/serial/Start (6.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-203266 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-203266 --no-kubernetes --driver=docker  --container-runtime=crio: (6.477881323s)
--- PASS: TestNoKubernetes/serial/Start (6.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-203266 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-203266 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.369207ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
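
The verification itself is a one-liner over SSH; the non-zero exit is the pass condition, since kubelet must not be active in a --no-kubernetes profile:

    minikube ssh -p NoKubernetes-203266 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero: kubelet is not running inside the node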

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.38s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-203266
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-203266: (1.239415313s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-203266 --driver=docker  --container-runtime=crio
E1212 22:33:32.986814   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-203266 --driver=docker  --container-runtime=crio: (9.289891707s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-203266 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-203266 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.081019ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-323945
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.53s)

                                                
                                    
TestPause/serial/Start (79.74s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-826077 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-826077 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.743435326s)
--- PASS: TestPause/serial/Start (79.74s)

                                                
                                    
TestNetworkPlugins/group/false (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-797711 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-797711 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (157.855935ms)

                                                
                                                
-- stdout --
	* [false-797711] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 22:35:20.457966  191490 out.go:296] Setting OutFile to fd 1 ...
	I1212 22:35:20.458243  191490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:35:20.458253  191490 out.go:309] Setting ErrFile to fd 2...
	I1212 22:35:20.458258  191490 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 22:35:20.458452  191490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17761-9643/.minikube/bin
	I1212 22:35:20.459000  191490 out.go:303] Setting JSON to false
	I1212 22:35:20.460293  191490 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4673,"bootTime":1702415848,"procs":513,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 22:35:20.460357  191490 start.go:138] virtualization: kvm guest
	I1212 22:35:20.462806  191490 out.go:177] * [false-797711] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 22:35:20.464449  191490 out.go:177]   - MINIKUBE_LOCATION=17761
	I1212 22:35:20.465825  191490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 22:35:20.464490  191490 notify.go:220] Checking for updates...
	I1212 22:35:20.468601  191490 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17761-9643/kubeconfig
	I1212 22:35:20.470081  191490 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17761-9643/.minikube
	I1212 22:35:20.471454  191490 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 22:35:20.472853  191490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 22:35:20.474813  191490 config.go:182] Loaded profile config "force-systemd-env-313893": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:35:20.474914  191490 config.go:182] Loaded profile config "kubernetes-upgrade-856114": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I1212 22:35:20.475001  191490 config.go:182] Loaded profile config "pause-826077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1212 22:35:20.475072  191490 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 22:35:20.496830  191490 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1212 22:35:20.496971  191490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 22:35:20.548737  191490 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:66 SystemTime:2023-12-12 22:35:20.539666026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648050176 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1212 22:35:20.548838  191490 docker.go:295] overlay module found
	I1212 22:35:20.550701  191490 out.go:177] * Using the docker driver based on user configuration
	I1212 22:35:20.552130  191490 start.go:298] selected driver: docker
	I1212 22:35:20.552141  191490 start.go:902] validating driver "docker" against <nil>
	I1212 22:35:20.552150  191490 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 22:35:20.554223  191490 out.go:177] 
	W1212 22:35:20.555583  191490 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 22:35:20.556969  191490 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-797711 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-797711" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:35:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-856114
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:34:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-826077
contexts:
- context:
    cluster: kubernetes-upgrade-856114
    user: kubernetes-upgrade-856114
  name: kubernetes-upgrade-856114
- context:
    cluster: pause-826077
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:34:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-826077
  name: pause-826077
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-856114
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/kubernetes-upgrade-856114/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/kubernetes-upgrade-856114/client.key
- name: pause-826077
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/pause-826077/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/pause-826077/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-797711

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-797711"

                                                
                                                
----------------------- debugLogs end: false-797711 [took: 3.210027169s] --------------------------------
helpers_test.go:175: Cleaning up "false-797711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-797711
--- PASS: TestNetworkPlugins/group/false (3.52s)
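The pass here hinges on minikube rejecting the configuration up front: as the stderr above shows, `--cni=false` with the crio runtime fails validation with MK_USAGE (exit status 14) before any cluster is created. A hypothetical sketch of that kind of guard, not minikube's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// validateCNI is a hypothetical guard mirroring the behavior in the log:
// the crio container runtime needs a CNI plugin, so --cni=false is rejected
// before any cluster work begins. Not minikube's actual code.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		// minikube surfaces this class of error as MK_USAGE, exit status 14.
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}
```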

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (32.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-826077 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-826077 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.680925418s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.70s)

                                                
                                    
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-826077 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-826077 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-826077 --output=json --layout=cluster: exit status 2 (342.141831ms)

                                                
                                                
-- stdout --
	{"Name":"pause-826077","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-826077","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
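The exit status 2 is again expected: a paused cluster reports the HTTP-style status code 418 ("Paused") in the --layout=cluster JSON while its kubelet shows 405 ("Stopped"). A minimal Go sketch of decoding the fields visible above (struct names and the trimmed schema are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed to the fields visible in the output above; names are assumptions.
type ClusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []Node `json:"Nodes"`
}

type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	Components map[string]Component `json:"Components"`
}

type Component struct {
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	raw := `{"Name":"pause-826077","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-826077","StatusCode":200,
		"Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	var s ClusterStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d (%s)\n", s.Name, s.StatusCode, s.StatusName)
	for name, c := range s.Nodes[0].Components {
		fmt.Printf("  %s: %d (%s)\n", name, c.StatusCode, c.StatusName)
	}
}
```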

                                                
                                    
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-826077 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-826077 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (4.21s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-826077 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-826077 --alsologtostderr -v=5: (4.210241209s)
--- PASS: TestPause/serial/DeletePaused (4.21s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-826077
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-826077: exit status 1 (20.714161ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-826077: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)
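The non-zero exit is the assertion here: once the profile is deleted, `docker volume inspect` prints `[]` and fails with "no such volume". A small Go sketch that treats that combination as confirmation the volume is gone (an illustrative helper, not the test's actual code):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// volumeDeleted treats the combination seen above -- a non-zero exit, "[]" on
// stdout, and "no such volume" on stderr -- as confirmation the volume is
// gone. An illustrative helper, not the test's actual code.
func volumeDeleted(name string) (bool, error) {
	cmd := exec.Command("docker", "volume", "inspect", name)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err != nil && strings.Contains(stderr.String(), "no such volume") {
		return true, nil
	}
	return false, err
}

func main() {
	gone, err := volumeDeleted("pause-826077")
	fmt.Println("deleted:", gone, "err:", err)
}
```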

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (129.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-818613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1212 22:36:24.302160   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-818613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m9.764251086s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (67.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-544020 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-544020 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m7.582406999s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-544020 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6f19fcbe-dc59-4b14-b9d1-7a7498620853] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6f19fcbe-dc59-4b14-b9d1-7a7498620853] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.014936065s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-544020 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-544020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-544020 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-544020 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-544020 --alsologtostderr -v=3: (11.902841093s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-544020 -n no-preload-544020
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-544020 -n no-preload-544020: exit status 7 (83.899091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-544020 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
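The "may be ok" note reflects minikube's convention, visible here, that `status` exits with code 7 while printing "Stopped" for a stopped host, so the test tolerates that exit code and proceeds to enable the addon. A Go sketch of the same tolerance (binary path and profile name are taken from the log; treating exit 7 as the only stopped case is an assumption):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path and profile name are taken from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-544020")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit code 7 with host "Stopped" is tolerated ("may be ok"):
		// the cluster exists but is not running.
		fmt.Printf("host=%q, exit 7: stopped, may be ok\n", host)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("host=%q\n", host)
}
```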

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (339.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-544020 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-544020 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m39.521899706s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-544020 -n no-preload-544020
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-818613 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [de26c086-06b7-4161-bdf5-c0624fde090a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [de26c086-06b7-4161-bdf5-c0624fde090a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014215744s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-818613 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-818613 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-818613 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-818613 --alsologtostderr -v=3
E1212 22:38:32.987410   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-818613 --alsologtostderr -v=3: (11.84894594s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-818613 -n old-k8s-version-818613
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-818613 -n old-k8s-version-818613: exit status 7 (89.665056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-818613 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (419.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-818613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-818613 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m59.347726416s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-818613 -n old-k8s-version-818613
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (419.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (41.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-029809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-029809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (41.758873102s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-876167 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-876167 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m10.542716035s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-029809 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3dab0ac8-3aed-43c6-837d-e41ce5a513e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3dab0ac8-3aed-43c6-837d-e41ce5a513e8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.015764183s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-029809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-029809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-029809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-029809 --alsologtostderr -v=3
E1212 22:40:04.710471   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-029809 --alsologtostderr -v=3: (11.908245245s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-029809 -n embed-certs-029809
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-029809 -n embed-certs-029809: exit status 7 (94.700497ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-029809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (344.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-029809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-029809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m43.678993126s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-029809 -n embed-certs-029809
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-876167 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72b1ddfd-4458-43f2-b330-62bbd02056c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72b1ddfd-4458-43f2-b330-62bbd02056c1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.013679997s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-876167 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-876167 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-876167 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)
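
The enable step above points metrics-server at a placeholder image and an unreachable registry (fake.domain), so the wiring can be checked without a real pull; the same two commands by hand:

  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-876167 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context default-k8s-diff-port-876167 describe deploy/metrics-server -n kube-system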

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-876167 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-876167 --alsologtostderr -v=3: (11.886606864s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167: exit status 7 (80.099364ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-876167 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-876167 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1212 22:41:24.302408   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
E1212 22:43:07.758005   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
E1212 22:43:32.987506   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/ingress-addon-legacy-036387/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-876167 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m38.65237199s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vkkxq" [fa0867e4-0ef4-40ee-ba07-abd4fc9ab317] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vkkxq" [fa0867e4-0ef4-40ee-ba07-abd4fc9ab317] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.016112151s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vkkxq" [fa0867e4-0ef4-40ee-ba07-abd4fc9ab317] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008947887s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-544020 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-544020 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
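
The image audit reads "image list --format=json" and flags anything outside the stock image set; a rough shell equivalent, assuming jq is available and that each JSON entry carries a repoTags array (that field name is an assumption, not taken from this log):

  out/minikube-linux-amd64 -p no-preload-544020 image list --format=json \
    | jq -r '.[].repoTags[]?' \
    | grep -v '^registry.k8s.io/'    # crude filter: whatever remains counts as non-minikube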

TestStartStop/group/no-preload/serial/Pause (2.68s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-544020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-544020 -n no-preload-544020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-544020 -n no-preload-544020: exit status 2 (293.550735ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-544020 -n no-preload-544020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-544020 -n no-preload-544020: exit status 2 (298.596127ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-544020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-544020 -n no-preload-544020
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-544020 -n no-preload-544020
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.68s)
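
The pause round-trip above, condensed; the exit-status-2 results are the expected state while paused (apiserver "Paused", kubelet "Stopped"):

  out/minikube-linux-amd64 pause -p no-preload-544020
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-544020   # "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-544020     # "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p no-preload-544020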

TestStartStop/group/newest-cni/serial/FirstStart (35.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-778541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-778541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (35.769283267s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.77s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-778541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-778541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094225826s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (2.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-778541 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-778541 --alsologtostderr -v=3: (2.013608796s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.01s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-778541 -n newest-cni-778541
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-778541 -n newest-cni-778541: exit status 7 (84.445578ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-778541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (26.04s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-778541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-778541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (25.741431337s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-778541 -n newest-cni-778541
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.04s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-778541 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-778541 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-778541 -n newest-cni-778541
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-778541 -n newest-cni-778541: exit status 2 (293.688042ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-778541 -n newest-cni-778541
E1212 22:45:04.710449   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/addons-818905/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-778541 -n newest-cni-778541: exit status 2 (295.153454ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-778541 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-778541 -n newest-cni-778541
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-778541 -n newest-cni-778541
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)

TestNetworkPlugins/group/auto/Start (45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (44.9965877s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.00s)
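
The TestNetworkPlugins groups in this run differ only in how the CNI is selected at start; gathered from the start commands logged in this report, with the shared flags (--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio) elided as "...":

  out/minikube-linux-amd64 start -p auto-797711 ...                              # no CNI flag
  out/minikube-linux-amd64 start -p kindnet-797711 ... --cni=kindnet
  out/minikube-linux-amd64 start -p calico-797711 ... --cni=calico
  out/minikube-linux-amd64 start -p custom-flannel-797711 ... --cni=testdata/kube-flannel.yaml
  out/minikube-linux-amd64 start -p enable-default-cni-797711 ... --enable-default-cni=true
  out/minikube-linux-amd64 start -p flannel-797711 ... --cni=flannel
  out/minikube-linux-amd64 start -p bridge-797711 ... --cni=bridge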

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gc9lc" [3404d2f4-6506-46c4-b948-e9aef8b7c536] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015356073s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gc9lc" [3404d2f4-6506-46c4-b948-e9aef8b7c536] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009732772s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-818613 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-818613 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-818613 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-818613 -n old-k8s-version-818613
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-818613 -n old-k8s-version-818613: exit status 2 (313.985434ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-818613 -n old-k8s-version-818613
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-818613 -n old-k8s-version-818613: exit status 2 (322.96397ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-818613 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-818613 -n old-k8s-version-818613
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-818613 -n old-k8s-version-818613
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

TestNetworkPlugins/group/kindnet/Start (69.46s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.458957457s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.46s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kcld8" [bbac9030-7041-4719-924d-045138a45447] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kcld8" [bbac9030-7041-4719-924d-045138a45447] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00881466s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.34s)
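
Each NetCatPod step is the same deploy-and-wait against the freshly started cluster; sketched for this profile, with kubectl wait standing in for the harness's polling:

  kubectl --context auto-797711 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-797711 wait --for=condition=Ready pod -l app=netcat --timeout=15m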

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lhlds" [dec7e2cc-c7c0-465f-a255-405e4085aebc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021210189s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lhlds" [dec7e2cc-c7c0-465f-a255-405e4085aebc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008996471s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-029809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
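
DNS, Localhost and HairPin are three probes run inside the netcat deployment; side by side, exactly as logged for this profile:

  kubectl --context auto-797711 exec deployment/netcat -- nslookup kubernetes.default                    # cluster DNS
  kubectl --context auto-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # pod-local port
  kubectl --context auto-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin via the service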

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-029809 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-029809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-029809 -n embed-certs-029809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-029809 -n embed-certs-029809: exit status 2 (308.151739ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-029809 -n embed-certs-029809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-029809 -n embed-certs-029809: exit status 2 (300.429172ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-029809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-029809 -n embed-certs-029809
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-029809 -n embed-certs-029809
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.77s)

TestNetworkPlugins/group/calico/Start (67.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.78285545s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.78s)

TestNetworkPlugins/group/custom-flannel/Start (60.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1212 22:46:24.302824   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.167741289s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.17s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-29s8v" [04aa78bc-a083-4076-8615-3c87d3702e6f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-29s8v" [04aa78bc-a083-4076-8615-3c87d3702e6f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.057208644s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.06s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-flmm4" [cdabdb3e-3875-4eac-b27d-825734954637] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021540547s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
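
ControllerPod confirms the CNI's own pod is healthy before the traffic probes run; a kubectl wait stand-in for the harness's poll, selector and namespace as logged:

  kubectl --context kindnet-797711 wait --for=condition=Ready pod -l app=kindnet -n kube-system --timeout=10m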

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vw5cx" [f6542073-19ad-4f18-9c72-bfdf3ec5965e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vw5cx" [f6542073-19ad-4f18-9c72-bfdf3ec5965e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.008466847s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-29s8v" [04aa78bc-a083-4076-8615-3c87d3702e6f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008795874s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-876167 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-876167 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-876167 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167: exit status 2 (288.66317ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167: exit status 2 (331.63807ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-876167 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-876167 -n default-k8s-diff-port-876167
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.88s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k228r" [e208e121-0a06-41b4-939b-4373244cbe24] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024839229s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/calico/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.55s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8j9fl" [8928c04d-c4dc-47a3-852b-a4901ecec5fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8j9fl" [8928c04d-c4dc-47a3-852b-a4901ecec5fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.012261092s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5gmcp" [91990c54-a327-42b0-a3b6-b0919aff7114] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5gmcp" [91990c54-a327-42b0-a3b6-b0919aff7114] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.009132833s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.38s)

TestNetworkPlugins/group/enable-default-cni/Start (44.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (44.516336094s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.52s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (60.2s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1212 22:47:39.706592   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/no-preload-544020/client.crt: no such file or directory
E1212 22:47:40.987245   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/no-preload-544020/client.crt: no such file or directory
E1212 22:47:43.548000   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/no-preload-544020/client.crt: no such file or directory
E1212 22:47:48.669002   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/no-preload-544020/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.204561713s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.20s)

TestNetworkPlugins/group/bridge/Start (79.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1212 22:47:58.909129   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/no-preload-544020/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-797711 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.771571408s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.77s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jdt7r" [011b9799-25a1-401d-ab75-e03fb2b6b6ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jdt7r" [011b9799-25a1-401d-ab75-e03fb2b6b6ee] Running
E1212 22:48:15.883607   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:15.888911   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:15.900038   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:15.920441   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:15.960881   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:16.041447   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:16.202473   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:16.523184   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:17.164134   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
E1212 22:48:18.444736   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/old-k8s-version-818613/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.009650695s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-79xqh" [cffbd6e0-54cd-4396-a93d-dc632abc92d7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016010531s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5r9nv" [07bfa882-6970-4a9f-ac77-d02a57f559c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5r9nv" [07bfa882-6970-4a9f-ac77-d02a57f559c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008306022s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-797711 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-797711 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xdclx" [d2c93343-bcdc-48e4-b24d-5eb6e4da13c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xdclx" [d2c93343-bcdc-48e4-b24d-5eb6e4da13c1] Running
E1212 22:49:27.345898   16399 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/functional-355715/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.007829278s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-797711 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-797711 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (27/315)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-843719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-843719
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.59s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-797711 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-797711

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-797711

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/hosts:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/resolv.conf:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-797711

>>> host: crictl pods:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: crictl containers:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> k8s: describe netcat deployment:
error: context "kubenet-797711" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-797711" does not exist

>>> k8s: netcat logs:
error: context "kubenet-797711" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-797711" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-797711" does not exist

>>> k8s: coredns logs:
error: context "kubenet-797711" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-797711" does not exist

>>> k8s: api server logs:
error: context "kubenet-797711" does not exist

>>> host: /etc/cni:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: ip a s:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: ip r s:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: iptables-save:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: iptables table nat:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-797711" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-797711" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-797711" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: kubelet daemon config:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> k8s: kubelet logs:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:35:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-313893
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:35:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-856114
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:34:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-826077
contexts:
- context:
    cluster: force-systemd-env-313893
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:35:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-env-313893
  name: force-systemd-env-313893
- context:
    cluster: kubernetes-upgrade-856114
    user: kubernetes-upgrade-856114
  name: kubernetes-upgrade-856114
- context:
    cluster: pause-826077
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:34:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-826077
  name: pause-826077
current-context: force-systemd-env-313893
kind: Config
preferences: {}
users:
- name: force-systemd-env-313893
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/force-systemd-env-313893/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/force-systemd-env-313893/client.key
- name: kubernetes-upgrade-856114
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/kubernetes-upgrade-856114/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/kubernetes-upgrade-856114/client.key
- name: pause-826077
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/pause-826077/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/pause-826077/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-797711

>>> host: docker daemon status:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: docker daemon config:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: docker system info:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: cri-docker daemon status:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: cri-docker daemon config:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: cri-dockerd version:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: containerd daemon status:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: containerd daemon config:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: containerd config dump:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: crio daemon status:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: crio daemon config:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: /etc/crio:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

>>> host: crio config:
* Profile "kubenet-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-797711"

----------------------- debugLogs end: kubenet-797711 [took: 3.433079014s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-797711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-797711
--- SKIP: TestNetworkPlugins/group/kubenet (3.59s)

TestNetworkPlugins/group/cilium (5.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-797711 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-797711" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-797711

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-797711" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-797711" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-797711

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-797711

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-797711" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-797711" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-797711" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-797711" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-797711" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: kubelet daemon config:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> k8s: kubelet logs:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"
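
Note: every query above fails the same way because the cilium-797711 profile was never created on this runner (the cilium variant of TestNetworkPlugins is skipped). A hypothetical sanity check, not part of the recorded run, using the same minikube binary the suite invokes:

  # List the profiles known to this minikube binary; cilium-797711 should be absent.
  out/minikube-linux-amd64 profile list
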
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:35:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-856114
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17761-9643/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:34:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-826077
contexts:
- context:
    cluster: kubernetes-upgrade-856114
    user: kubernetes-upgrade-856114
  name: kubernetes-upgrade-856114
- context:
    cluster: pause-826077
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 22:34:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-826077
  name: pause-826077
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-856114
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/kubernetes-upgrade-856114/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/kubernetes-upgrade-856114/client.key
- name: pause-826077
  user:
    client-certificate: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/pause-826077/client.crt
    client-key: /home/jenkins/minikube-integration/17761-9643/.minikube/profiles/pause-826077/client.key
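
Note: the kubeconfig above contains only the kubernetes-upgrade-856114 and pause-826077 contexts and an empty current-context, which is consistent with every kubectl call in this dump reporting that the "cilium-797711" context does not exist. A minimal check, assuming kubectl reads this same kubeconfig file:

  # Print only the context names; cilium-797711 will not be listed.
  kubectl config get-contexts -o name
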
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-797711

>>> host: docker daemon status:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: docker daemon config:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: docker system info:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: cri-docker daemon status:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: cri-docker daemon config:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: cri-dockerd version:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: containerd daemon status:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: containerd daemon config:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: containerd config dump:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: crio daemon status:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: crio daemon config:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: /etc/crio:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

>>> host: crio config:
* Profile "cilium-797711" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797711"

----------------------- debugLogs end: cilium-797711 [took: 5.670964491s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-797711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-797711
--- SKIP: TestNetworkPlugins/group/cilium (5.84s)